Archives for 03/2016

Mass-blocking IP addresses with ipset

30/03/2016

Using ipset to block many IP addresses

I was sponsoring an upload of ipset to Debian the other day, which reminded me what a cool program it is, as I am going to show. It makes administering related netfilter (that is: firewall) rules easy, and with good performance. This is achieved by changing how rules match in iptables. Traditionally, an iptables rule matches a single network identity only, for example a single IP address or a single network. With IP sets you can easily operate on a bunch of (otherwise unrelated) addresses at once. If you ever need bulk actions in your firewall, for example blacklisting a long list of IP addresses at once, you will love IP sets. I promise.

Drawbacks of netfilter

IP sets have existed for quite some time, but they only made it into the upstream Linux kernel with version 2.6.39. That means Debian Wheezy supports them as is; on Debian Squeeze you can either use a backported kernel or compile the module yourself from the ipset-source package. In either case you additionally need the command line utilities from the package named ipset, so install that package before you start. That said, Squeeze users should note that the ipset syntax I am demonstrating below differs slightly from the syntax supported by the Squeeze utilities. The big picture remains the same, though.

IP sets do not conflict with iptables; they extend it in a useful way, and you can combine both as you like. In fact, you still need iptables to turn IP sets into something really useful. Nonetheless, you will soon hit iptables' limitations once you exceed a certain number of rules. You can combine as many conditions within a filter rule as you like, but you can only specify a single pattern for each condition. As you can imagine, this does not scale very well when the addresses to match do not follow a tight definition such as a CIDR prefix.

This means you can happily filter whole network blocks such as 192.168.0.0/24 (which translates to 254 usable hosts) in iptables, but there is no way to specify an arbitrary, non-contiguous set of IP addresses within this range if it cannot be expressed as a CIDR prefix. For example, there is no way to block, say, 192.168.0.5, 192.168.0.100 and 192.168.0.220 in a single statement. You really need to declare three rules which differ only in the IP address. Suppose you want to prevent these three addresses from accessing your host. You would probably do something like this:

iptables -A INPUT -s 192.168.0.5 -p TCP -j REJECT
iptables -A INPUT -s 192.168.0.100 -p TCP -j REJECT
iptables -A INPUT -s 192.168.0.220 -p TCP -j REJECT

Alternatively you could do

iptables -A INPUT -s 192.168.0.0/24 -p TCP -j REJECT

but this would block 251 unrelated hosts, too. Not a good deal, I'd say. Now, while the former alternative merely looks annoying, what is the real problem with it? The problem is: it does not scale. Netfilter rules work like a fall-through trapdoor. Whenever a packet enters the system, it passes through several chains, and in each of these chains netfilter checks all rules until either one rule matches or no rules are left. In the latter case the default action applies. In netfilter terminology, a chain determines where an interception of the regular packet flow occurs. In the example above the specified chain is INPUT, which applies to any incoming packet.
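For comparison, here is a minimal sketch of how ipset collapses those three rules into one. This uses the modern ipset syntax (kernel 2.6.39 and newer); as noted above, the older Squeeze utilities differ slightly, and the set name blacklist is my arbitrary choice:

# create a set holding individual IPv4 addresses
ipset create blacklist hash:ip

# add the three offending hosts to the set
ipset add blacklist 192.168.0.5
ipset add blacklist 192.168.0.100
ipset add blacklist 192.168.0.220

# a single iptables rule now matches every address in the set
iptables -A INPUT -m set --match-set blacklist src -p TCP -j REJECT

Adding a thousand more addresses then means a thousand ipset add calls, but still only one iptables rule to evaluate per packet.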

In the end this means that every single packet sent to your host needs to be checked against the patterns specified in every one of your rules. And believe me, in a server setup, there are lots of packets flowing to your machine. Consider, for example, a single HTTP request, which requires at the very least four packets sent from a client machine to your web server. Each of these packets passes through the rules of your INPUT chain as a bare minimum before it is eventually accepted or discarded.

This adds a substantial computing overhead for every rule in your system. That is not much of a problem if you do not have many rules (for some values of “many”, as I am going to show). However, you may well end up with a large rule set. For example, if you suffer from a DDoS attack, you may be tempted to block the attacking clients in your firewall (I have written about doing this for Apache 2 and for Lighttpd, in German). In such a situation you can easily end up adding thousands of rules.

Being under attack, the performance of your server is poor already. By adding many rules to your firewall, however, you further increase the computing overhead of every request significantly. To illustrate my point, I ran some benchmarks. Below you find the response times of an HTTP web server while doing sequential requests for a single file 10 KiB in size. I explain my measurement method in detail further below. For now, look at the graph. It shows the average response time of an Apache 2 web server, divided into four parts:

  • connect time: the time elapsed until the server completed the initial TCP handshake
  • send time: the time needed to reliably send an HTTP request over the established TCP connection (that means: one packet sent and acknowledged by the server)
  • first byte: the time elapsed until the server sent the first byte of the corresponding HTTP response
  • response complete: the time elapsed until the server sent all of the remaining bytes of the corresponding HTTP response (remaining HTTP headers + 10 KiB of payload)

[Graph: average response time of an Apache 2 web server, broken down into the four phases above]
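If you want to reproduce similar per-phase timings yourself, curl's --write-out variables give a rough approximation. This is only an illustration of the idea, not the measurement harness I used, and the URL is a placeholder:

$ curl -s -o /dev/null -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://example.com/file-10k.bin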


MySQL: Setting Account Resource Limits

30/03/2016

One means of restricting client use of MySQL server resources is to set the global max_user_connections system variable to a nonzero value. This limits the number of simultaneous connections that can be made by any given account, but places no limits on what a client can do once connected. In addition, setting max_user_connections does not enable management of individual accounts. Both types of control are of interest to MySQL administrators.

To address such concerns, MySQL permits limits for individual accounts on use of these server resources:

  • The number of queries an account can issue per hour
  • The number of updates an account can issue per hour
  • The number of times an account can connect to the server per hour
  • The number of simultaneous connections to the server by an account

Any statement that a client can issue counts against the query limit, unless its results are served from the query cache. Only statements that modify databases or tables count against the update limit.

An account in this context corresponds to a row in the mysql.user table. That is, a connection is assessed against the User and Host values in the user table row that applies to the connection. For example, an account 'usera'@'%.example.com' corresponds to a row in the user table that has User and Host values of usera and %.example.com, to permit usera to connect from any host in the example.com domain. In this case, the server applies resource limits in this row collectively to all connections by usera from any host in the example.com domain because all such connections use the same account.

Before MySQL 5.0.3, an account was assessed against the actual host from which a user connects. This older method of accounting may be selected by starting the server with the --old-style-user-limits option. In this case, if usera connects simultaneously from host1.example.com and host2.example.com, the server applies the account resource limits separately to each connection. If usera connects again from host1.example.com, the server applies the limits for that connection together with the existing connection from that host.

To establish resource limits for an account at account-creation time, use the CREATE USER statement. To modify the limits for an existing account, use ALTER USER. (Before MySQL 5.7.6, use GRANT for both new and existing accounts.) Provide a WITH clause that names each resource to be limited. The default value for each limit is zero (no limit). For example, to create a new account that can access the customer database, but only in a limited fashion, issue these statements:

mysql> CREATE USER 'francis'@'localhost' IDENTIFIED BY 'frank'
    ->     WITH MAX_QUERIES_PER_HOUR 20
    ->          MAX_UPDATES_PER_HOUR 10
    ->          MAX_CONNECTIONS_PER_HOUR 5
    ->          MAX_USER_CONNECTIONS 2;

The limit types need not all be named in the WITH clause, but those named can be present in any order. The value for each per-hour limit should be an integer representing a count per hour. For MAX_USER_CONNECTIONS, the limit is an integer representing the maximum number of simultaneous connections by the account. If this limit is set to zero, the global max_user_connections system variable value determines the number of simultaneous connections. If max_user_connections is also zero, there is no limit for the account.
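To adjust one of these limits later from the shell, something like the following should work; a hedged sketch, assuming MySQL 5.7.6 or later (setting a limit back to 0 removes it):

$ mysql -u root -p -e "ALTER USER 'francis'@'localhost' WITH MAX_QUERIES_PER_HOUR 100;"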


Collect & visualize your logs with Logstash, Elasticsearch & Redis

22/03/2016

Source: michael.bouvy.net

Update of December 6th: although Logstash does the job as a log shipper, you might consider replacing it with Lumberjack / Logstash Forwarder, which needs far fewer resources, and keeping Logstash on your indexer to collect, transform and index your logs data (into ElasticSearch): check out my latest blog post on the topic.

[Screenshot: Kibana Dashboard]


Even if you manage a single Linux server, you probably already know how hard it is to keep an eye on what's going on with it, and especially to track log data. And this becomes even worse when you have several (physical or virtual) servers to administer.

Although Munin is very helpful for monitoring various metrics from my servers / VMs, I felt the need for something more, a bit less static and more interactive.

There are 3 kinds of logs I especially wanted to track:

  • Apache 2 access logs
  • iptables logs
  • Syslogs

After searching around on the internet for a great tool that would help me, I read about the open source log management tool Logstash, which seemed to perfectly suit a (major) part of my needs: log collecting and processing.

For the purpose of this post, I will take the following network architecture and assume I want to collect my Apache, iptables, and system logs from servers 1/2/3 (“shippers”) on server 4 (“indexer”) and visualize them:

[Diagram: servers 1-3 (log shippers) sending log data to server 4 (indexer)]

As you can see, I am using 4 complementary applications, the role of each being:

  • Logstash: log collector, processor and shipper (to Redis) on log “shippers” 1-3; log indexer on server 4 (reads from Redis, writes to Elasticsearch)
  • Redis: log data broker, receiving data from log “shippers” 1-3
  • Elasticsearch: log data persistent storage
  • Kibana: (time-based) log data visualization (graphs, tables, etc.)
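To make the shipper role concrete, here is a minimal sketch of a shipper-side Logstash configuration derived from the architecture above. The Redis address 192.168.0.4, the config path, the log path and the list key logstash are my assumptions, not values from the original post:

# on each shipper (servers 1-3): read Apache access logs, push events to Redis on the indexer
cat > /etc/logstash/conf.d/shipper.conf <<'EOF'
input {
  file { path => "/var/log/apache2/access.log" type => "apache-access" }
}
output {
  redis { host => "192.168.0.4" data_type => "list" key => "logstash" }
}
EOF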



How to secure SSH login with one-time passwords on Linux

22/03/2016

As the saying goes, security is not a product but a process. While the SSH protocol itself is cryptographically secure by design, someone can wreak havoc on your SSH service if it is not administered properly, be it weak passwords, compromised keys or an outdated SSH client.

As far as SSH authentication is concerned, public key authentication is in general considered more secure than password authentication. However, key authentication is actually not desirable, or may even be less secure, if you are logging in from a public or shared computer, where things like a stealth keylogger or memory scraper are always a possibility. If you cannot trust the local computer, it is better to use something else. This is when « one-time passwords » come in handy. As the name implies, each one-time password is for single use only. Such disposable passwords can be safely used in untrusted environments as they cannot be re-used even when they are stolen.

One way to generate disposable passwords is via Google Authenticator. In this tutorial, I am going to demonstrate another way to create one-time passwords for SSH login: OTPW, a one-time password login package. Unlike Google Authenticator, you do not rely on any third party for one-time password generation and verification.

What is OTPW?

OTPW consists of a one-time password generator and PAM-integrated verification routines. In OTPW, one-time passwords are generated in advance with the generator, and carried securely by the user (e.g., printed on a sheet of paper). Cryptographic hashes of the generated passwords are then stored on the SSH server host. When a user logs in with a one-time password, OTPW's PAM module verifies the password, and invalidates it to prevent re-use.

Step One: Install and Configure OTPW on Linux

Debian, Ubuntu or Linux Mint:

Install the OTPW packages with apt-get.

$ sudo apt-get install libpam-otpw otpw-bin

Open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).

#@include common-auth

and add the following two lines (to enable one-time password authentication):

auth       required     pam_otpw.so
session    optional     pam_otpw.so
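One assumption worth double-checking before you log out: sshd must allow PAM keyboard-interactive authentication for pam_otpw to be able to prompt you. In /etc/ssh/sshd_config, make sure the following are set, then restart sshd:

ChallengeResponseAuthentication yes
UsePAM yes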


Fedora or CentOS/RHEL:

OTPW is not available as a prebuilt package on Red Hat based systems. So let’s install OTPW by building it from the source.

First, install the prerequisites:

$ sudo yum install git gcc pam-devel
$ git clone https://www.cl.cam.ac.uk/~mgk25/git/otpw
$ cd otpw

Open Makefile with a text editor, and edit a line that starts with « PAMLIB= » as follows.

On 64-bit system:

PAMLIB=/usr/lib64/security

On 32-bit system:

PAMLIB=/usr/lib/security

Compile and install it. Note that installation will automatically restart the SSH server, so be ready to be disconnected if you are on an SSH connection.

$ make
$ sudo make install

Now you need to update SELinux policy since /usr/sbin/sshd tries to write to user’s home directory, which is not allowed by default SELinux policy. The following commands will do. If you are not using SELinux, skip this step.

$ sudo grep sshd /var/log/audit/audit.log | audit2allow -M mypol
$ sudo semodule -i mypol.pp

Next, open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).

#auth       substack     password-auth

and add the following two lines (to enable one-time password authentication):

auth       required     pam_otpw.so
session    optional     pam_otpw.so
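With PAM configured on either distribution, each user still has to generate a password sheet before one-time login works; presumably the full article covers this, but as a minimal sketch: otpw-gen prompts for a prefix password, stores the hashes in ~/.otpw, and prints the password list to standard output:

$ otpw-gen > ~/otpw-sheet.txt    # print this sheet and carry it with you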


How to run iptables automatically after reboot on Debian

22/03/2016

If you have customized iptables rules, and would like to load the customized iptables rules persistently across reboots on Debian, you can leverage if-up.d scripts that are located in /etc/network/if-up.d. On Debian, any script that is marked as executable and placed in /etc/network/if-up.d gets executed when a network interface is brought up.

In order to run iptables automatically after reboot on Debian, do the following.

First, customize iptables as you wish, and then save the current iptables rule-set using iptables-save command.

$ sudo iptables-save > /etc/firewall.conf

The above command will dump the current iptables rule set into /etc/firewall.conf file which iptables-restore command can later use to restore the same rule set.

Now create the following if-up.d script called iptables that restores the saved iptables rule set.

$ sudo vi /etc/network/if-up.d/iptables
#!/bin/sh
iptables-restore < /etc/firewall.conf
$ sudo chmod +x /etc/network/if-up.d/iptables

Alternatively, you can add the « iptables-restore < /etc/firewall.conf » command to /etc/rc.local, which gets executed at the end of system boot-up.
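Either way, after the next reboot you can check that the rules came back by listing the active rule set:

$ sudo iptables -L -n -v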

Source: Xmodulo

Securing your Linux server

20/03/2016

Securing Linux

This guide will teach you how to secure a server, and in doing so introduce you to the topics of IT security. Why does this matter? By definition, a server is open to the world, so a minimum of security is worthwhile to guard against the most simplistic attacks.

The plan is therefore the following:

  • a presentation of the vulnerabilities;

  • a presentation of the tools to mitigate them.

Of course, this is not a complete course in IT security; this tutorial is an introduction. To put it simply, it is a bit like closing the shutters at night.

Hardware-wise, here is what you will need:

  • a server running an xBuntu or Debian distribution (for other distributions the principle is the same, but the commands may differ);

  • root access (over ssh, for example) and a console.

The whole tutorial takes place on the command line so that it is accessible to everyone. Since files will need to be edited, make sure you have a command-line text editor (I will personally use nano, which is very simple).

Let's get started!

Filtering traffic with the firewall

Overview

The firewall is the essential component for securing your server. It filters all traffic, allowing only the exchanges permitted by the administrator. Without a properly configured firewall, more or less all traffic is allowed (that is, an attacker can do whatever he wants on your machine), and this kind of weakness can be detected by a simple port scan.

Now, the Linux kernel already provides a firewall, which can be configured through the iptables utility (normally found at /sbin/iptables). If it is not installed:

apt-get install iptables

Let us now go through how a firewall works; it is relatively simple. A firewall analyzes all traffic and checks whether each exchanged packet complies with its rules (filtering criteria). So all you have to do is specify good rules in order to forbid any superfluous traffic.

The criteria can vary (filtering on ports, protocols, IP addresses, etc.). As a baseline, we will specify our rules on ports. Of course, you must be as strict as possible in choosing the rules; that is why, by default, a firewall is always configured first to block everything, absolutely everything. Then we “open” (allow traffic on) certain ports that we want to use (for example, for a web server we open port 80 so that the website is reachable).

Declaring the rules

Full filtering

Here is the procedure to create the rules:

1. Create the script:

nano /etc/init.d/firewall

And write in it: #!/bin/sh

2. Flush the previous rules to start from a clean base:

iptables -t filter -F
iptables -t filter -X

3. Block all traffic by default (if you are connected over ssh, of course, do not run the script yet!):

iptables -t filter -P INPUT DROP
iptables -t filter -P FORWARD DROP
iptables -t filter -P OUTPUT DROP

4. Do not close connections that are already established:

iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

With the -m and --state parameters, we tell netfilter not to close connections that are already established.

5. Allow the loopback (we are not going to block ourselves!):

iptables -t filter -A INPUT -i lo -j ACCEPT
iptables -t filter -A OUTPUT -o lo -j ACCEPT

Note: lo means localhost (the server itself).

Everything is now blocked; all that remains is to open the ports we use.
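The full article continues from here, but as a hedged sketch of what “opening” a port looks like with the rules above, this is how incoming ssh (port 22) and web (port 80) traffic would be allowed:

iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT

Replies to these connections are allowed out by the ESTABLISHED,RELATED rule from step 4.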


How to configure virtual hosts in Apache HTTP server

18/03/2016

Source: Xmodulo

Virtual hosting refers to the technique that allows a physical server to host more than one website domain (e.g., site1.com, site2.com). Virtual hosting is prevalent in shared web hosting environments, where typically hundreds of websites or blogs are packed on a single dedicated server to amortize server maintenance costs.

Not a web hosting company? Virtual hosting can still be useful to you. For example, you can place several of your websites on a single VPS that you rent, saving on your VPS cost. To serve multiple domains on one VPS, you just need to configure a virtual host for each on its web server, and point the domains to the static IP address of your VPS.

Due to its usefulness, virtual hosting is supported by all modern web server software, such as Apache, Nginx, Lighttpd and IIS. In this tutorial, I will demonstrate how to create and enable virtual hosts in Apache HTTP server in a Linux environment. There is a slight difference in the configuration between Debian-based and Red Hat-based systems; I will highlight the difference along the way.

Before I start, I assume that Apache HTTP server is already installed on your Linux server. If not, refer to our tutorials for Debian or Red Hat based systems, and install Apache server before proceeding.

As an exercise, let’s create a virtual host for domain abc.com on Apache web server.

Step One: Create Document Root Directory for Abc.com Domain

Start by creating a directory which will hold the web pages for abc.com. This directory is known as « document root » for the domain. Following the common practice, let’s organize all document root directories under /var/www, and name them after the corresponding domains. Also, create a dedicated log directory for abc.com under /var/log.

$ sudo mkdir /var/www/abc.com
$ sudo mkdir /var/log/apache2/abc.com (Debian, Ubuntu, Mint)
$ sudo mkdir /var/log/httpd/abc.com (Fedora, CentOS, RHEL)

Create a test webpage for the domain:

$ sudo vi /var/www/abc.com/index.html
<html>
  <head>
    <title>Welcome to Abc.com</title>
  </head>
  <body>
    <h1>Sample page</h1>
    This page is powered by Apache Virtual Host!
  </body>
</html>

Change the ownership of the document root directory to the user that Apache web server runs as.

On Debian, Ubuntu or Linux Mint:

$ sudo chown -R www-data:www-data /var/www/abc.com

On Fedora, CentOS or RHEL:

$ sudo chown -R apache:apache /var/www/abc.com
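The step that follows in the full article is the virtual host definition itself. As a hedged sketch for the Debian layout, with the file name and paths assumed from the directories created above (on Red Hat based systems the block would go under /etc/httpd/conf.d instead):

# /etc/apache2/sites-available/abc.com.conf
<VirtualHost *:80>
    ServerName abc.com
    ServerAlias www.abc.com
    DocumentRoot /var/www/abc.com
    ErrorLog /var/log/apache2/abc.com/error.log
    CustomLog /var/log/apache2/abc.com/access.log combined
</VirtualHost>

On Debian-family systems the host would then be enabled with a2ensite abc.com followed by an Apache reload.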



How to live stream video from webcam on Linux

18/03/2016

Source: Xmodulo

Have you ever wanted to live stream a video feed from your webcam for someone else? You could use existing video chat software such as Skype or Google+ Hangouts. However, if you do not want to rely on any third-party streaming infrastructure, you can set up your own webcam streaming server in house.

In this tutorial, I will describe how to live stream video from a webcam by using VLC. VLC is an open-source, cross-platform media player which can handle virtually all video and audio formats. VLC can also run as a streaming server supporting various streaming protocols such as RTP, HTTP, RTSP, etc.

Before attempting to set up webcam streaming on your own, you need to understand the huge potential security risk of streaming a webcam over the Internet. Anyone can tap into your streaming feed unless you properly restrict access to the streaming server and encrypt the streaming traffic. I'll briefly mention several protections against such threats towards the end of the tutorial.

Install VLC on Linux

Before setting up a streaming server with VLC, install VLC first.

To install VLC on Debian, Ubuntu or Linux Mint:

$ sudo apt-get install vlc

To install VLC on Fedora, first enable RPM Fusion’s free repository, and then run:

$ sudo yum install vlc

To install VLC on CentOS or RHEL 6, first set up EPEL repository, and then use the following commands:

$ cd /etc/yum.repos.d/
$ sudo wget http://pkgrepo.linuxtech.net/el6/release/linuxtech.repo
$ sudo yum install vlc

Verify Webcam Device on VLC

Next, verify whether a webcam device is successfully detected on your Linux system and VLC.

To do so, first find out the device name of your webcam with the following command. In this example, the device name of the webcam is /dev/video0.

$ ls /dev/video*
/dev/video0

Now use the following command to test video from your webcam. Replace « video0 » with your own device name.

$ vlc v4l2:///dev/video0

If your webcam is successfully detected by VLC, you should see a video stream of yourself.

Configure Webcam Streaming on VLC

Assuming that your webcam is successfully recognized by VLC, I am going to show how to configure webcam streaming. In this setup, the webcam feed is streamed over HTTP in WMV format.

To configure VLC for webcam streaming, first launch VLC.

$ vlc

Choose « Streaming » from VLC menu.


On this screen, choose the webcam/audio device names (e.g., /dev/video0 for the webcam, and hw:0,0 for audio). Click on the « Show more options » checkbox. Make a note of the value strings in the « MRL » and « Edit Options » fields; we will use these strings later in the tutorial. Click on the « Stream » button at the bottom.
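The dialog ultimately just assembles an input MRL and a --sout option string, so the same stream can also be started headless. A hedged sketch; the transcode parameters here are illustrative assumptions, not the article's exact values:

$ cvlc v4l2:///dev/video0 :input-slave=alsa://hw:0,0 \
      --sout '#transcode{vcodec=WMV2,vb=800,acodec=wma2,ab=128}:http{mux=asf,dst=:8080/stream.wmv}'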



What are useful CLI tools for Linux system admins?

17/03/2016

System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.


This article will present some of the most popular and useful CLI tools recommended for sysadmins in their day to day activities. If you would like to recommend any useful tool which is not listed here, don’t forget to share it in the comment section.

Network Tools

1. ping: Check end-to-end connectivity (RTT delay, jitter, packet loss) of a remote host with ICMP echo/reply. Useful to check system status and reachability.

2. hping: Network scanning and testing tool that can generate ICMP/TCP/UDP ping packets. Often used for advanced port scanning, firewall testing, manual path MTU discovery and fragmentation testing.

3. traceroute: Discover a layer-3 forwarding path from a local host to a remote destination host with TTL-limited ICMP/UDP/TCP probe packets. Useful to troubleshoot network reachability and routing problems.

4. mtr: A variation of traceroute which characterizes per-hop packet loss/jitter with running statistics. Useful to characterize routing path delays.

5. netcat/socat: A swiss army knife of TCP/IP networking, allowing you to read/write byte streams over TCP/UDP. Useful to troubleshoot firewall policies and service availability.
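For example, a quick check of whether a remote service port is reachable (host and port chosen arbitrarily):

$ nc -zv example.com 443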

6. dig: DNS troubleshooting tool that can generate forward queries, reverse queries, find authoritative name servers, check CNAME, MX and other DNS records. Can be instructed to query a specific DNS server of your choosing.
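For example (record type and resolver chosen arbitrarily):

$ dig MX example.com @8.8.8.8 +short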

7. nslookup: Another DNS checking/troubleshooting tool. Works with all DNS queries and records. Can query a particular DNS server.

8. dnsyo: A DNS testing tool which checks DNS propagation by performing DNS lookups from a large number of open resolvers located across 1,500 different networks around the world.

9. lsof: Show information about files (e.g., regular files, pipes or sockets) which are opened by processes. Useful to monitor processes or users in terms of their open network connections or opened files.

10. iftop: A ncurses-based TUI utility that can be used to monitor in real time bandwidth utilization and network connections for individual network interfaces. Useful to keep track of bandwidth hogging applications, users, destinations and ports.

11. netstat: A network statistics utility that can show status information and statistics about open network connections (TCP/UDP ports, IP addresses), routing tables, TX/RX traffic and protocols. Useful for network related diagnosis and performance tuning.

12. tcpdump: A popular packet sniffer tool based on the libpcap packet capture library. Can define packet capturing filters in Berkeley Packet Filter format.

13. tshark: Another CLI packet sniffer software with full compatibility with its GUI counterpart, Wireshark. Supports 1,000 protocols and the list is growing. Useful to troubleshoot, analyze and store information on live packets.

14. ip: A versatile CLI networking tool which is part of the iproute2 package. Used to check and modify routing tables, network device state, and IP tunneling settings. Useful to view routing tables, add/remove static routes, configure network interfaces, and otherwise troubleshoot routing issues.

15. ifup/ifdown: Used to bring up or shut down a particular network interface. Often a preferred alternative to restarting the entire network service.

16. autossh: A program which creates an SSH session and automatically restarts the session should it disconnect. Often useful to create a persistent reverse SSH tunnel across restrictive corporate networks.

17. iperf: A network testing tool which measures maximum bi-directional throughput between a pair of hosts by injecting customizable TCP/UDP data streams in between.

18. elinks/lynx: text-based web browsers for CLI-based server environments.


How to analyze and view Apache web server logs interactively on Linux

17/03/2016

Whether you are in the web hosting business, or run a few web sites on a VPS yourself, chances are you want to display visitor statistics such as top visitors, requested files (dynamic or static), used bandwidth, client browsers, and referring sites, and so forth.

GoAccess is a command-line log analyzer and interactive viewer for the Apache or Nginx web server. With this tool, you will not only be able to browse the data mentioned earlier, but also parse the web server logs to dig for further data, all of this within a terminal window in real time. Since as of today most web servers run either a Debian derivative or a Red Hat based distribution as the underlying operating system, I will show you how to install and use GoAccess on Debian and CentOS.

Installing GoAccess on Linux

In Debian, Ubuntu and derivatives, run the following command to install GoAccess:

# aptitude install goaccess

In CentOS, you’ll need to enable the EPEL repository and then:

# yum install goaccess

In Fedora, simply use the yum command:

# yum install goaccess

If you want to install GoAccess from the source to enable further options (such as GeoIP location), install required dependencies for your operating system, and then follow these steps:

# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install

That will install version 0.8.5, but you can always check the latest version on the Downloads page of the project's web site.

Since GoAccess does not require any further configurations, once it’s installed you are ready to go.

Running GoAccess

To start using GoAccess, just run it against your Apache access log.

For Debian and derivatives:

# goaccess -f /var/log/apache2/access.log

For Red Hat based distros:

# goaccess -f /var/log/httpd/access_log

When you first launch GoAccess, you will be presented with the following screen to choose the date and log format. As explained, you can toggle between options using the spacebar and proceed with F10. As for the date and log formats, you may want to refer to the Apache documentation if you need to refresh your memory.

In this case, choose Common Log Format (CLF):

[Screenshot: GoAccess date and log format selection screen]

and then press F10. You will be presented with the statistics screen. For the sake of brevity, only the header, which shows the summary of the log file, is shown in the next image:

[Screenshot: GoAccess statistics screen header, showing the log file summary]
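Besides the interactive ncurses view, GoAccess 0.8.x can also emit a static HTML report by redirecting its output; the report path below is an arbitrary choice:

# goaccess -f /var/log/apache2/access.log -a > /var/www/html/report.html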
