Archive

Articles tagged ‘monitoring’

How to set up web-based network traffic monitoring system on Linux

25/05/2016 Comments off

When you are tasked with monitoring network traffic on the local network, there are many options to consider, depending on the scale and traffic volume of the network, the monitoring platform and interface, the type of backend database, and so on.

ntopng is an open-source (GPLv3) network traffic analyzer which provides a web interface for real-time network traffic monitoring. It runs on multiple platforms including Linux and Mac OS X. ntopng comes with a simple RMON-like agent with built-in web server capability, and uses a Redis key-value server to store time series statistics. You can install ntopng on any designated monitoring server connected to your network, and use a web browser to access the real-time traffic reports available on the server.

In this tutorial, I will describe how to set up a web-based network traffic monitoring system on Linux by using ntopng.

Features of ntopng

  • Flow-level, protocol-level real-time analysis of local network traffic.
  • Domain, AS (Autonomous System), VLAN level statistics.
  • Geolocation of IP addresses.
  • Deep packet inspection (DPI) based service discovery (e.g., Google, Facebook).
  • Historical traffic analysis (e.g., hourly, daily, weekly, monthly, yearly).
  • Support for sFlow, NetFlow (v5/v9) and IPFIX through nProbe.
  • Network traffic matrix (who’s talking to whom?).
  • IPv6 support.

Install ntopng on Linux

The official website offers binary packages for Ubuntu and CentOS. So if you use either platform, you can install these packages.

If you want to build the latest ntopng from its source, follow the instructions below. (Update: these instructions are valid for ntopng 1.0. For ntopng 1.1 and higher, see the updated instructions).

To build ntopng on Debian, Ubuntu or Linux Mint:

$ sudo apt-get install libpcap-dev libglib2.0-dev libgeoip-dev redis-server wget libxml2-dev
$ tar xzf ntopng-1.0.tar.gz -C ~
$ cd ~/ntopng-1.0/
$ ./configure
$ make geoip
$ make

In the above steps, "make geoip" will automatically download a free version of the GeoIP databases with wget from maxmind.com. So make sure that your system is connected to the network.

To build ntopng on Fedora:

$ sudo yum install libpcap-devel glib2-devel GeoIP-devel libxml2-devel redis wget
$ tar xzf ntopng-1.0.tar.gz -C ~
$ cd ~/ntopng-1.0/
$ ./configure
$ make geoip
$ make

To install ntopng on CentOS or RHEL, first set up EPEL repository, and then follow the same instructions as in Fedora above.

Configure ntopng on Linux

After building ntopng, create a configuration directory for ntopng, and prepare default configuration files as follows. I assume that "192.168.1.0/24" is the CIDR address prefix of your local network.

$ sudo mkdir -p /etc/ntopng
$ sudo -e /etc/ntopng/ntopng.start
--local-networks "192.168.1.0/24"
--interface 1
$ sudo -e /etc/ntopng/ntopng.conf
-G=/var/run/ntopng.pid
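With the configuration files in place, you can start Redis and then launch the ntopng daemon. A minimal sketch, assuming ntopng accepts a configuration file path as its argument (as in 1.0) and that the Debian service name and default web port 3000 apply:

$ sudo service redis-server start    # ntopng needs Redis running first
$ cd ~/ntopng-1.0/
$ sudo ./ntopng /etc/ntopng/ntopng.conf

Then point a web browser at http://&lt;monitoring-server-ip&gt;:3000 to reach the traffic reports.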

Read more…

How to monitor a Linux server and desktop remotely from web browser

24/05/2016 Comments off

Monitoring Linux server

When it comes to monitoring a Linux server, there are more than enough options to choose from. While there are many production-quality monitoring solutions (e.g., Nagios, Zabbix, Zenoss) boasting fancy UIs, monitoring scalability, comprehensive reporting capabilities, and so on, these solutions are probably overkill for most of us. If all you need is to check the basic status (e.g., CPU load, memory usage, active processes, disk usage) of a remote Linux server or desktop, consider linux-dash.

linux-dash is a lightweight web-based monitoring dashboard for Linux machines, which can display, in real time, various system properties such as CPU load, RAM usage, disk usage, Internet speed, network connections, RX/TX bandwidth, logged-in users, and running processes. linux-dash does not come with any backend database for storing long-term statistics. Simply drop the linux-dash app into an existing web server (e.g., Apache, Nginx), and you are good to go. It is a quick and easy way to set up remote monitoring for personal projects.

In this tutorial, I am going to describe how to set up linux-dash with the Nginx web server on Linux. Nginx is preferred over the Apache web server here due to its lightweight engine.

Set up linux-dash on Debian, Ubuntu or Linux Mint

First, install Nginx web server with php-fpm.

$ sudo apt-get install git nginx php5-json php5-fpm php5-curl

Configure Nginx for linux-dash app by creating /etc/nginx/conf.d/linuxdash.conf as follows. In this example, we are going to use port 8080.

$ sudo vi /etc/nginx/conf.d/linuxdash.conf
server {
    server_name $domain_name;
    listen 8080;
    root /var/www;
    index index.html index.php;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location ~* \.(?:xml|ogg|mp3|mp4|ogv|svg|svgz|eot|otf|woff|ttf|css|js|jpg|jpeg|gif|png|ico)$ {
        try_files $uri =404;
        expires max;
        access_log off;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    location /linux-dash {
        index index.html index.php;
    }

    # PHP-FPM via sockets
    location ~ \.php(/|$) {
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        try_files $uri $uri/ /index.php?$args;
        include fastcgi_params;
    }
}

Disable the default site configuration.

$ sudo rm /etc/nginx/sites-enabled/default
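Before moving on, it never hurts to verify that the new server block parses cleanly; nginx ships with a built-in configuration test:

$ sudo nginx -t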

Configure php-fpm by editing /etc/php5/fpm/pool.d/www.conf. Make sure to edit the "user", "group" and "listen" directives as shown below. You can keep the rest of the configuration unchanged.

$ sudo vi /etc/php5/fpm/pool.d/www.conf

. . .
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
. . .

Proceed to download and install linux-dash.

$ git clone https://github.com/afaqurk/linux-dash.git
$ sudo cp -r linux-dash/ /var/www/
$ sudo chown -R www-data:www-data /var/www

Restart Nginx web server as well as php5-fpm to finalize installation.

$ sudo service php5-fpm restart
$ sudo service nginx restart
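At this point you can verify that the dashboard responds. A quick check from the shell (replace &lt;server-ip&gt; with your server's address; the /linux-dash/ path follows from the copy step above):

$ curl -I http://&lt;server-ip&gt;:8080/linux-dash/

An HTTP 200 response means you can now browse the dashboard; otherwise, check /var/log/nginx/error.log.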

Read more…

How to sniff HTTP traffic from the command line on Linux

23/05/2016 Comments off

Suppose you want to sniff live HTTP web traffic (i.e., HTTP requests and responses) on the wire for some reason. For example, you may be testing experimental features of a web server. Or you may be debugging a web application or a RESTful service. Or you may be trying to troubleshoot PAC (proxy auto config) or check for any malware files surreptitiously downloaded from a website. Whatever the reason is, there are cases where HTTP traffic sniffing is helpful, for system admins, developers, or even end users.

While packet sniffing tools such as tcpdump are popularly used for live packet dumps, you need to set up proper filtering to capture only HTTP traffic, and even then, their raw output typically cannot be interpreted at the HTTP protocol level so easily. Real-time web server log parsers such as ngxtop provide human-readable real-time web traffic traces, but they are only applicable when you have full access to live web server logs.

What would be nice is a tcpdump-like sniffing tool that targets HTTP traffic only. In fact, httpry is exactly that: an HTTP packet sniffing tool. httpry captures live HTTP packets on the wire, and displays their content at the HTTP protocol level in a human-readable format. In this tutorial, let’s see how we can sniff HTTP traffic with httpry.

Install httpry on Linux

On Debian-based systems (Ubuntu or Linux Mint), httpry is not available in base repositories. So build it from the source:

$ sudo apt-get install gcc make git libpcap0.8-dev
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install

On Fedora, CentOS or RHEL, you can install httpry with yum as follows. On CentOS/RHEL, enable the EPEL repository before running yum.

$ sudo yum install httpry

If you still want to build httpry from the source on RPM-based systems, you can easily do that by:

$ sudo yum install gcc make git libpcap-devel
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install

Basic Usage of httpry

The basic use case of httpry is as follows.

$ sudo httpry -i <network-interface>

httpry then listens on a specified network interface, and displays captured HTTP requests/responses in real time.

In most cases, however, you will be swamped by the fast-scrolling output as packets come in and out. So you will want to save the captured HTTP packets for offline analysis. For that, use either the ‘-b’ or ‘-o’ option. The ‘-b’ option allows you to save raw HTTP packets into a binary file as-is, which can then be replayed with httpry later. The ‘-o’ option, on the other hand, saves the human-readable output of httpry into a text file.

To save raw HTTP packets into a binary file:

$ sudo httpry -i eth0 -b output.dump

To replay saved HTTP packets:

$ httpry -r output.dump

Note that when you read a dump file with ‘-r’ option, you don’t need root privilege.

To save httpry’s output to a text file:

$ sudo httpry -i eth0 -o output.txt

Advanced Usage of httpry

If you want to monitor only specific HTTP methods (e.g., GET, POST, PUT, HEAD, CONNECT, etc.), use the ‘-m’ option:

$ sudo httpry -i eth0 -m get,head
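httpry also accepts a libpcap-style capture filter as its last argument, which narrows the capture before httpry parses anything. For example, to watch only traffic involving one particular web server (the address here is a placeholder):

$ sudo httpry -i eth0 'host 10.0.0.5 and tcp port 80'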

If you downloaded httpry’s source code, you will notice that it comes with a collection of Perl scripts which aid in analyzing httpry’s output. These scripts are found in the httpry/scripts/plugins directory. If you want to write a custom parser for httpry’s output, these scripts can be good examples to start from. Some of their capabilities are:

  • hostnames: Display a list of unique host names with counts.
  • find_proxies: Detect web proxies.
  • search_terms: Find and count search terms entered in search services.
  • content_analysis: Find URIs which contain specific keywords.
  • xml_output: Convert output into XML format.
  • log_summary: Generate a summary of log.
  • db_dump: Dump log file data into a MySQL database.

Before using these scripts, first run httpry with the ‘-o’ option for some time. Once you have obtained the output file, run all the scripts on it at once by using this command:

$ cd httpry/scripts
$ perl parse_log.pl -d ./plugins <httpry-output-file>

You may encounter warnings with several plugins. For example, db_dump plugin may fail if you haven’t set up a MySQL database with DBI interface. If a plugin fails to initialize, it will automatically be disabled. So you can ignore those warnings.

After parse_log.pl completes, you will see a number of analysis results (*.txt/xml) in the httpry/scripts directory. For example, log_summary.txt gives a summary of the parsed traffic.

To conclude, httpry can be a life saver if you are in a situation where you need to interpret live HTTP packets. That might not be so common for average Linux users, but it never hurts to be prepared. What do you think of this tool?

Source: Xmodulo

How to analyze and view Apache web server logs interactively on Linux

17/03/2016 Comments off

Whether you are in the web hosting business, or run a few web sites on a VPS yourself, chances are you want to display visitor statistics such as top visitors, requested files (dynamic or static), used bandwidth, client browsers, referring sites, and so forth.

GoAccess is a command-line log analyzer and interactive viewer for the Apache or Nginx web server. With this tool, you will not only be able to browse the data mentioned earlier, but also parse the web server logs to dig for further data as well – and all of this within a terminal window in real time. Since most web servers today use either a Debian derivative or a Red Hat based distribution as the underlying operating system, I will show you how to install and use GoAccess on Debian and CentOS.

Installing GoAccess on Linux

In Debian, Ubuntu and derivatives, run the following command to install GoAccess:

# aptitude install goaccess

In CentOS, you’ll need to enable the EPEL repository and then:

# yum install goaccess

In Fedora, simply use yum command:

# yum install goaccess

If you want to install GoAccess from the source to enable further options (such as GeoIP location), install required dependencies for your operating system, and then follow these steps:

# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install

That will install version 0.8.5, but you can always check the latest version on the Downloads page of the project’s web site.

Since GoAccess does not require any further configuration, once it’s installed you are ready to go.

Running GoAccess

To start using GoAccess, just run it against your Apache access log.

For Debian and derivatives:

# goaccess -f /var/log/apache2/access.log

For Red Hat based distros:

# goaccess -f /var/log/httpd/access_log
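GoAccess can also read a log stream from standard input, which is handy for analyzing rotated and compressed logs in one pass. A sketch, assuming the usual Debian rotation layout (zcat -f passes uncompressed files through unchanged):

# zcat -f /var/log/apache2/access.log* | goaccess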

When you first launch GoAccess, you will be presented with the following screen to choose the date and log format. As explained, you can toggle between options using the spacebar and proceed with F10. As for the date and log formats, you may want to refer to the Apache documentation if you need to refresh your memory.

In this case, choose Common Log Format (CLF):

[Screenshot: GoAccess date and log format selection screen]

and then press F10. You will be presented with the statistics screen. For the sake of brevity, only the header, which shows the summary of the log file, is shown in the next image:

[Screenshot: GoAccess statistics screen header summarizing the log file]

Read more…

How to measure packets per second or throughput on high speed network interface

17/03/2016 Comments off

There are many traffic monitoring tools available on Linux, which can monitor/classify network traffic, and report real-time traffic statistics in fancy user interfaces. Most of these tools (e.g., ntopng, iftop) are powered by libpcap, which is a packet capture library used to monitor network traffic in user space. Despite their versatility, however, libpcap-based network monitoring tools cannot scale to handle traffic on multi Gigabit rate network interfaces, due to the overhead associated with user-space packet capture.

In this tutorial, I will present simple shell scripts that can monitor network traffic on a per-interface basis, without relying on the slow libpcap library. These scripts are fast enough to support multi-Gigabit rates, but are only suitable if you are interested in "aggregate" network statistics per interface.

The secret of the scripts lies in the sysfs virtual filesystem, which is used by the kernel to export device- or driver-related information to user space. Network interface statistics are exported via /sys/class/net/&lt;ethX&gt;/statistics.

For example, the statistics on eth0 interface are found in these files:

  • /sys/class/net/eth0/statistics/rx_packets: number of packets received
  • /sys/class/net/eth0/statistics/tx_packets: number of packets transmitted
  • /sys/class/net/eth0/statistics/rx_bytes: number of bytes received
  • /sys/class/net/eth0/statistics/tx_bytes: number of bytes transmitted
  • /sys/class/net/eth0/statistics/rx_dropped: number of packets dropped while received
  • /sys/class/net/eth0/statistics/tx_dropped: number of packets dropped while transmitted

The numbers stored in the files are automatically refreshed in real-time by the kernel. Therefore, you can write scripts that calculate traffic statistics based on these files.
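For a quick sanity check, you can read a counter twice by hand and take the difference, which is exactly what the scripts below automate (assuming an interface named eth0):

$ R1=$(cat /sys/class/net/eth0/statistics/rx_bytes); sleep 1
$ R2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
$ echo "$((R2 - R1)) bytes/s received on eth0"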

The following are two such scripts (thanks to joemiller). The first script counts the number of packets per second, received (RX) or sent (TX), on an interface, while the second script measures the network bandwidth of incoming (RX) and outgoing (TX) traffic on an interface. For these scripts to work, you do not need to install anything.

Measure Packets per Second on an Interface

#!/bin/bash
INTERVAL="1"  # update interval in seconds
if [ -z "$1" ]; then
        echo
        echo usage: $0 [network-interface]
        echo
        echo e.g. $0 eth0
        echo
        echo shows packets-per-second
        exit
fi

IF=$1

while true
do
        R1=`cat /sys/class/net/$1/statistics/rx_packets`
        T1=`cat /sys/class/net/$1/statistics/tx_packets`
        sleep $INTERVAL
        R2=`cat /sys/class/net/$1/statistics/rx_packets`
        T2=`cat /sys/class/net/$1/statistics/tx_packets`
        TXPPS=`expr $T2 - $T1`
        RXPPS=`expr $R2 - $R1`
        echo "TX $1: $TXPPS pkts/s RX $1: $RXPPS pkts/s"
done
 

Measure Network Bandwidth on an Interface

#!/bin/bash
 
INTERVAL="1" # update interval in seconds
 
if [ -z "$1" ]; then
 echo
 echo usage: $0 [network-interface]
 echo
 echo e.g. $0 eth0
 echo
 exit
fi
 
IF=$1
 
while true
do
 R1=`cat /sys/class/net/$1/statistics/rx_bytes`
 T1=`cat /sys/class/net/$1/statistics/tx_bytes`
 sleep $INTERVAL
 R2=`cat /sys/class/net/$1/statistics/rx_bytes`
 T2=`cat /sys/class/net/$1/statistics/tx_bytes`
 TBPS=`expr $T2 - $T1`
 RBPS=`expr $R2 - $R1`
 TKBPS=`expr $TBPS / 1024`
 RKBPS=`expr $RBPS / 1024`
 echo "TX $1: $TKBPS kB/s RX $1: $RKBPS kB/s"
done

The following screenshot shows the above two scripts in action.

[Screenshot: the two scripts running in a terminal]

Source: Xmodulo 

What are good open-source log monitoring tools on Linux ?

10/03/2016 Comments off

In an operating system, logs are all about keeping track of events, be it critical system errors, resource usage warnings, transaction history, application status, or user activities. These logs, which are stored as (text or binary) files in the system, are useful for system auditing, debugging and maintenance. However, with so many different system entities generating log files, often at a growing rate, the challenge for a system admin is how to "consume" these log files effectively.

That’s when log monitoring tools come into the picture; they streamline the often laborious process of collecting, parsing and analyzing log files, as well as alerting system admins of any interesting events. These tools are designed from the ground up with a focus on log monitoring, so they offer many attractive features, such as scalable log aggregation and filtering, human-readable display, event correlation, visual or email notification, flexible log retention policy, and so on.

In this post, I am going to introduce a list of popular open-source log monitoring software for Linux, ranging from simple log file viewers to full-blown log monitoring frameworks.

Log Aggregation and Filtering

Log monitoring would not be possible without efficient and scalable mechanisms to collect and pre-process log files. Tools in this category focus on shipping, collecting, filtering, indexing and storing log files, so that they can be further analyzed and visualized in subsequent monitoring pipelines.

1. rsyslog: an open-source log collector server which can filter and consolidate log data (based on the syslog protocol) from different hosts and devices in the network. rsyslog can be configured as a server or a client, where the former plays the role of a log collector and the latter runs as a log sender (see the forwarding sketch after this list).

2. syslog-ng: another open-source implementation of the syslog protocol with more advanced and user-friendly features such as content-based filtering, easier-to-understand config format, and real-time event correlation.

3. systemd journal: systemd journal can be configured for remote journal logging, where locally logged events are forwarded to a remote server over HTTP. In this setup, systemd-journal-upload on a client host serializes and forwards journal messages to systemd-journal-remote running on a remote collector server.

4. logstash: an open-source tool that collects, parses, and stores log files for offline search and analysis. logstash can run in various pipelines thanks to the many plugins supporting different input/output interfaces, decoding/encoding, and filtering rules. Input plugins allow logstash to gather log files from different sources and protocols (e.g., files, S3, RabbitMQ, syslog, collectd, TCP/UDP sockets). Filter/codec plugins are used to parse, convert, modify and add metadata to log files. Output plugins pass processed log files to various target storages (e.g., file, Google cloud storage, Nagios, S3, Zabbix).

5. collectd: a daemon service which gathers various system-level statistics, and stores them for historical analysis or real-time graphing. Similar to logstash, collectd has an extensible architecture, where you can enable various input/output plugins to change its collection behavior. For log collection, collectd can leverage the LogFile and Network plugins to aggregate remote log files.

6. Logster: an open-source utility for parsing log files for any interesting data, and aggregating extracted data into metrics for subsequent reporting and graphing pipelines.

7. Fluentd: a unified log aggregation layer which allows in-stream processing for a variety of streaming data and log files. It comes with a huge plugin ecosystem with more than 300 plugins to support various input sources and output interfaces.

8. Nxlog: a unified log collector and forwarder which supports a variety of log sources, formats and protocols. Advanced features include multi-threaded log collection and processing, message buffering and prioritization, built-in log rotation, and TLS/SSL transport.

9. Scribe: a scalable log collector server developed by Facebook. Scribe can aggregate log data which is streamed in real time from a large number of clients. It uses Apache Thrift for protocol encoding, so its interface is compatible with pretty much any language. While a proven solution, Scribe is not something you can deploy quickly as a turnkey solution. Also, note that Scribe is no longer updated and maintained.

10. Flume: a highly scalable and reliable service to transport and collect large volumes of streaming log data from any clients, and store them in backend storage such as Apache Hadoop’s HDFS.
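To make the rsyslog entry above concrete, here is a minimal forwarding sketch in the classic rsyslog.conf syntax (the hostname and port are placeholders):

# On each sender, append to /etc/rsyslog.conf to forward all messages:
*.* @@logserver.example.com:514   # @@ sends over TCP; a single @ uses UDP

# On the collector, load the TCP input module and listen on port 514:
$ModLoad imtcp
$InputTCPServerRun 514

After editing, restart the rsyslog service on both ends.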

Read more…

Monitor TCP Traffic on specific port

30/11/2015 One comment

Source: superuser.com

I’ve searched quite extensively for this, but cannot seem to come up with a working example.

My objective is to monitor TCP traffic on a specific port to see incoming connections and write them to a text file. The catch is that I also need a timestamp on each row to show exactly when the client connected, down to the second.

I’ve already exhausted netstat, nmap, and tcptrack, but none of them supports timestamps.

I was thinking a linux shell script might work if I monitored a specific local port and wrote text to a file when a connection is made then just concatenate the date on each line.

I was playing with this:

netstat -ano|grep 443|grep ESTABLISHED

as well as this:

tcptrack -i eth0 port 443

but neither suits my needs as I need the time the connection comes in at.
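For what it's worth, the shell-script idea mentioned above can work. A rough sketch that polls the same netstat output used in the question once per second and prefixes each entry with a timestamp (the output file is a placeholder, samples are not de-duplicated, and connections shorter than the poll interval can be missed):

#!/bin/bash
PORT=443
OUT=/var/log/port${PORT}-connections.txt
while true; do
    # grab currently established connections on the port
    netstat -ano | grep ":${PORT}" | grep ESTABLISHED | \
    while read -r line; do
        # prepend a per-second timestamp to each entry
        echo "$(date '+%Y-%m-%d %H:%M:%S') ${line}"
    done >> "${OUT}"
    sleep 1
done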

Categories: Network Tags:

Nmon – A nifty little tool to monitor system resources on Linux

26/11/2015 Comments off

Source: binarytides.com

Nmon

Nmon (Nigel’s performance Monitor for Linux) is another very useful command-line utility that can display information about various system resources like CPU, memory, disk, network, etc. It was developed at IBM and later released as open source.

It is available for most common architectures like x86 and ARM, and platforms like Linux, Unix, etc. It is interactive and the output is well organised, similar to htop.

Using Nmon it is possible to view the performance of different system resources on a single screen.

The man page describes nmon as:

nmon is a systems administrator, tuner, benchmark tool. It can display the CPU, memory, network, disks (mini graphs or numbers), file systems, NFS, top processes, resources (Linux version & processors) and on Power micro-partition information.

Project website
http://nmon.sourceforge.net/

Install Nmon

Debian/Ubuntu type distros have nmon in the default repos, so grab it with apt.

$ sudo apt-get install nmon

Fedora users can get it with yum

$ sudo yum install nmon

CentOS users need to install nmon from the rpmforge/repoforge repository. It is not present in EPEL.
Either download the correct rpm installer from

http://pkgs.repoforge.org/nmon/

Or set up the rpmforge repository by following the instructions here:
http://wiki.centos.org/AdditionalResources/Repositories/RPMForge

And then install using yum

$ sudo yum install nmon
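Besides the interactive mode, nmon can record data to a file for later analysis. For example, using nmon's -f (file output), -s (interval) and -c (count) options, the following collects a snapshot every 30 seconds, 120 times, into a .nmon file in the current directory:

$ nmon -f -s 30 -c 120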

Read more…

Categories: System Tags:

Glances gives a quick overview of system usage on Linux

26/11/2015 Comments off

Monitor your Linux system

As a Linux sysadmin, it feels empowering to monitor system resources like CPU and memory from the command line. Peeking inside the system is a good habit, because that's one way of keeping your Linux system safe. Plenty of tools like Htop, Nmon, Collectl, top and iotop help you accomplish the task. Today, let's try another tool called Glances.

Glances

Glances is a tool similar to Nmon that has a very compact display to provide a complete overview of different system resources on just a single screen area. It does not support any complex functionality but just gives a brief overview of CPU, load, memory, network rate, disk IO, file systems, process count and details.

As a bonus, glances is actually cross-platform, which means you can use it even on obsolete OSes like Windows :P.

Here’s a quick glimpse of it.

[Screenshot: Glances running in a terminal]

The output is color highlighted. Green indicates optimum levels of usage whereas red indicates that the particular resource is under heavy use.

$ glances -v
Glances version 1.6 with PsUtil 0.6.1

Project homepage: https://github.com/nicolargo/glances and http://nicolargo.github.io/glances/
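Glances can also run in a client/server mode for remote monitoring; the 1.6 release shown above already supports the -s and -c options (the IP below is a placeholder):

$ glances -s                 # on the machine being monitored
$ glances -c 192.168.1.10    # on your workstation, pointing at that server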

Read more…

Traffic monitor small solution for Linux

24/11/2015 Comments off

Source: Trafic monitor small solution for Linux

TRAFIPgraph

The software is really small and fast to install. It was designed to work mostly with iptables on the Linux platform.

Installation is easy. Just add the iptables rules below to your firewall script, or somewhere they will always run at startup.
After that, modify the collect.sh script with the matching result from your iptables -L -n output. And of course, put the .php files somewhere accessible via the web and make the directory writeable. The output file must be in the same directory as the php files. By default, without selecting anything, it will show the last hour of traffic. It's pretty live (it updates every 6 seconds, unlike other programs).

Quick example:

[root@lair trafip]# iptables -A OUTPUT -s 0.0.0.0/0 -d 127.0.0.1
[root@lair trafip]# iptables -A INPUT -d 0.0.0.0/0 -s 127.0.0.1
[root@lair trafip]# iptables -L -n|grep 127|grep -v ACCEPT|grep -v LOG|grep -v DROP
all -- 127.0.0.1 0.0.0.0/0
all -- 0.0.0.0/0 127.0.0.1

Get the strings "127.0.0.1 0.0.0.0/0" and "0.0.0.0/0 127.0.0.1" and put them in collect.sh. They must be exactly as iptables shows them (better to copy-paste that part). The collect.sh script must be running at all times to count traffic.

In img.php modify:

$target variable with the name of the file where you redirect the output from collect.sh (ex: $target="local";)
$ip variable with the IP (ex: $ip="127.0.0.1";)
$maxspeed variable with the maximum traffic that can occur in 6 seconds (ex: $maxspeed="115200";). If you see black lines on your graph without the interface/traffic having stopped, increase the value.
$upload variable with red or green (ex: $upload="red";)
$download variable with red or green (ex: $download="green";)
$imagetype variable with png, gif or jpg; it sets the output format of the graph (ex: $imagetype="gif")

The output file must be something like:

11/12/02 05:57:26 10782702 149477806
11/12/02 05:57:32 10783170 149489806
11/12/02 05:57:38 10783810 149509426

(format: month/day/year[space]hour:minute:second[space]INPUT_counter[space]OUTPUT_counter)
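Given that format and the 6-second sampling interval, you can sanity-check the counters from the shell. A hypothetical one-liner, assuming the output file is named local as in the img.php example above (fields 3 and 4 are the INPUT and OUTPUT byte counters):

$ awk 'NR > 1 { print $1, $2, ($3 - in_c)/6 " B/s in", ($4 - out_c)/6 " B/s out" } { in_c = $3; out_c = $4 }' local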