Archive

Archives for 03/2016

Mass-blocking IP addresses with ipset

30/03/2016 Comments off

Using ipset to block many IP addresses

I was sponsoring an upload of ipset to Debian the other day, which reminded me what a cool program it is, as I am going to show. It makes administering related netfilter (that is: firewall) rules easy while keeping good performance. This is achieved by changing how rules match in iptables. Traditionally, an iptables rule matches a single network identity only, for example a single IP address or a single network. With IP sets you can easily operate on a bunch of (otherwise unrelated) addresses at once. If you happen to need bulk actions in your firewall, for example blacklisting a long list of IP addresses at once, you will love IP sets. I promise.

Drawbacks of netfilter

IP sets have existed for quite some time, but they only made it into the upstream Linux kernel with version 2.6.39. That means Debian Wheezy supports them out of the box; on Debian Squeeze you can either use a backported kernel or compile the module yourself from the ipset-source package. In any case you additionally need the command line utility named ipset, so install that package before you start. That said, Squeeze users should note that the ipset syntax I am demonstrating below differs slightly from the syntax supported by the Squeeze utilities. The big picture remains the same, though.

IP sets do not conflict with iptables, but extend it in a useful way; you can combine both as you like. In fact, you still need iptables to turn IP sets into something really useful. Nonetheless, you will soon hit a limitation of plain iptables once you exceed a certain number of rules: you can combine as many conditions within a filter rule as you like, but you can only specify a single pattern for each condition. As you can see, this does not scale well when the addresses to match do not follow a tight definition such as a CIDR block.

This means you can happily filter whole network blocks such as 192.168.0.0/24 (which covers 254 hosts) in iptables, but there is no way to specify an arbitrary, non-contiguous set of IP addresses within this range if it cannot be expressed with a CIDR prefix. For example, there is no way to block, say, 192.168.0.5, 192.168.0.100 and 192.168.0.220 in a single statement; you really need to declare three rules which differ only in the IP address. Suppose you want to prevent these three addresses from accessing your host. You would probably do something like this:

iptables -A INPUT -s 192.168.0.5 -p TCP -j REJECT
iptables -A INPUT -s 192.168.0.100 -p TCP -j REJECT
iptables -A INPUT -s 192.168.0.220 -p TCP -j REJECT

Alternatively, you could do

iptables -A INPUT -s 192.168.0.0/24 -p TCP -j REJECT

but this would block 251 unrelated hosts, too. Not a good deal, I'd say. Now, while the three-rule approach is merely annoying, what is the actual problem with it? The problem is: it does not scale. Netfilter rules work like a fall-through trapdoor: whenever a packet enters the system, it passes through several chains, and in each of these chains netfilter checks all rules until either one rule matches or there are no rules left, in which case the default action applies. In netfilter terminology, a chain determines when an interception of the regular packet flow occurs. In the example above the specified chain is INPUT, which applies to any incoming packet.

In the end this means every single packet sent to your host needs to be checked against the patterns specified in every one of your rules. And believe me, in a server setup, there are lots of packets flowing to your machine. Consider for example a single HTTP request, which requires at the very least four packets sent from the client machine to your web server. Each of these packets passes through the rules in your INPUT chain as a bare minimum, before it is eventually accepted or discarded.

This adds a substantial computing overhead for every rule in your system. That isn't much of a problem if you don't have many rules (for some values of “many”, as I am going to show). However, you may end up with a large rule set. For example, if you suffer from a DDoS attack, you may be tempted to block the attacking drone clients in your firewall (see my earlier posts, in German, for Apache 2 and Lighttpd). In such a situation you can easily end up adding thousands of rules.
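To preview where this is going: with IP sets, that whole list collapses into a single iptables rule plus one set entry per address. A minimal sketch using the modern ipset syntax (the set name blacklist is my own choice):

# create a set holding single IP addresses
ipset create blacklist hash:ip
ipset add blacklist 192.168.0.5
ipset add blacklist 192.168.0.100
ipset add blacklist 192.168.0.220
# one iptables rule matches the whole set
iptables -A INPUT -m set --match-set blacklist src -p TCP -j REJECT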

Being under attack, the performance of your server is poor already. By adding many rules to your firewall, however, you further increase the computing overhead of every request significantly. To illustrate my point, I've run some benchmarks. Below you find the response times of an HTTP server while doing sequential requests for a single file 10 KiB in size. I explain my measurement method in detail further below. For now, look at the graph. It shows the average response time of an Apache 2 web server, divided into four parts:

  • connect time: the time passed until the server completed the initial TCP handshake
  • send time: the time needed to reliably send the HTTP request over the established TCP connection (that means: one packet sent and acknowledged by the server)
  • first byte: the time passed until the server sent the first byte of the corresponding HTTP response
  • response complete: the time passed until the server sent all of the remaining bytes of the corresponding HTTP response (remaining HTTP header + 10 KiB of payload)
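
As a side note, if you want to reproduce similar numbers yourself, curl can report comparable timings via its --write-out variables (this is not the measurement tool used for the benchmarks here, and the URL is a placeholder):

$ curl -s -o /dev/null -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://example.com/10k-file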

[Graph: average Apache 2 response time per request, split into connect time, send time, first byte and response complete]

Read more…

MySQL: Setting Account Resource Limits

30/03/2016 Comments off

One means of restricting client use of MySQL server resources is to set the global max_user_connections system variable to a nonzero value. This limits the number of simultaneous connections that can be made by any given account, but places no limits on what a client can do once connected. In addition, setting max_user_connections does not enable management of individual accounts. Both types of control are of interest to MySQL administrators.

To address such concerns, MySQL permits limits for individual accounts on use of these server resources:

  • The number of queries an account can issue per hour
  • The number of updates an account can issue per hour
  • The number of times an account can connect to the server per hour
  • The number of simultaneous connections to the server by an account

Any statement that a client can issue counts against the query limit, unless its results are served from the query cache. Only statements that modify databases or tables count against the update limit.

An account in this context corresponds to a row in the mysql.user table. That is, a connection is assessed against the User and Host values in the user table row that applies to the connection. For example, an account 'usera'@'%.example.com' corresponds to a row in the user table that has User and Host values of usera and %.example.com, to permit usera to connect from any host in the example.com domain. In this case, the server applies resource limits in this row collectively to all connections by usera from any host in the example.com domain because all such connections use the same account.
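
To see which limits currently apply to a given account row, you can query the user table directly; the limit columns are named max_questions, max_updates, max_connections and max_user_connections:

mysql> SELECT User, Host, max_questions, max_updates,
    ->        max_connections, max_user_connections
    -> FROM mysql.user;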

Before MySQL 5.0.3, an account was assessed against the actual host from which a user connects. This older method of accounting may be selected by starting the server with the --old-style-user-limits option. In this case, if usera connects simultaneously from host1.example.com and host2.example.com, the server applies the account resource limits separately to each connection. If usera connects again from host1.example.com, the server applies the limits for that connection together with the existing connection from that host.

To establish resource limits for an account at account-creation time, use the CREATE USER statement. To modify the limits for an existing account, use ALTER USER. (Before MySQL 5.7.6, use GRANT, for new or existing accounts.) Provide a WITH clause that names each resource to be limited. The default value for each limit is zero (no limit). For example, to create a new account that can access the customer database, but only in a limited fashion, issue these statements:

mysql> CREATE USER 'francis'@'localhost' IDENTIFIED BY 'frank'
    ->     WITH MAX_QUERIES_PER_HOUR 20
    ->          MAX_UPDATES_PER_HOUR 10
    ->          MAX_CONNECTIONS_PER_HOUR 5
    ->          MAX_USER_CONNECTIONS 2;

The limit types need not all be named in the WITH clause, but those named can appear in any order. The value for each per-hour limit should be an integer representing a count per hour. For MAX_USER_CONNECTIONS, the limit is an integer representing the maximum number of simultaneous connections by the account. If this limit is set to zero, the global max_user_connections system variable value determines the number of simultaneous connections. If max_user_connections is also zero, there is no limit for the account.
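
As mentioned above, ALTER USER can adjust the limits of an existing account in MySQL 5.7.6 and later; for example, to raise the query limit of the account created above:

mysql> ALTER USER 'francis'@'localhost'
    ->     WITH MAX_QUERIES_PER_HOUR 100;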

Read more…

Collect & visualize your logs with Logstash, Elasticsearch & Redis

22/03/2016 Comments off

Source: michael.bouvy.net

Update of December 6th: although Logstash does the job as a log shipper, you might consider replacing it with Lumberjack / Logstash Forwarder, which needs far fewer resources, and keep Logstash on your indexer to collect, transform and index your log data (into Elasticsearch): check out my latest blog post on the topic.

[Screenshot: Kibana dashboard]


Even if you manage a single Linux server, you probably already know how hard it is to keep an eye on what's going on with it, and especially to track log data. And this becomes even worse when you have several (physical or virtual) servers to administer.

Although Munin is very helpful for monitoring various metrics from my servers / VMs, I felt the need for something more, a bit less static and more interactive.

There are three kinds of logs I especially wanted to track:

  • Apache 2 access logs
  • iptables logs
  • Syslogs

After searching around on the internet for a good tool, I read about the open source log management tool Logstash, which seems to perfectly suit a (major) part of my needs: log collecting and processing.

For the purpose of this post, I will take the following network architecture and assume I want to collect my Apache, iptables and system logs from servers 1/2/3 (“shippers”) on server 4 (“indexer”) and visualize them:

[Diagram: log shippers (servers 1-3) sending data to the indexer (server 4) via Redis]

As you can see, I am using four complementary applications, the role of each one being:

  • Logstash: log collector, processor and shipper (to Redis) on log “shippers” 1-3; log indexer on server 4 (reads from Redis, writes to Elasticsearch)
  • Redis: log data broker, receiving data from log “shippers” 1-3
  • Elasticsearch: persistent storage for log data
  • Kibana: (time-based) log data visualization (graphs, tables, etc.)
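
To give an idea of what the shipper side looks like, here is a minimal sketch of a Logstash shipper configuration; the log path, Redis host and key name are placeholders for your own values:

input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
  }
}
output {
  redis {
    host => "192.168.0.4"
    data_type => "list"
    key => "logstash"
  }
}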

Read more…

Categories: Système

How to secure SSH login with one-time passwords on Linux

22/03/2016 Comments off

As someone once said, security is not a product, but a process. While the SSH protocol itself is cryptographically secure by design, someone can wreak havoc on your SSH service if it is not administered properly, be it weak passwords, compromised keys or an outdated SSH client.

As far as SSH authentication is concerned, public key authentication is generally considered more secure than password authentication. However, key authentication is actually undesirable, or even less secure, if you are logging in from a public or shared computer, where things like stealth keyloggers or memory scrapers are always a possibility. If you cannot trust the local computer, it is better to use something else. This is when “one-time passwords” come in handy. As the name implies, each one-time password is for single use only. Such disposable passwords can be safely used in untrusted environments, as they cannot be re-used even if they are stolen.

One way to generate disposable passwords is via Google Authenticator. In this tutorial, I am going to demonstrate another way to create one-time passwords for SSH login: OTPW, a one-time password login package. Unlike Google Authenticator, you do not rely on any third party for one-time password generation and verification.

What is OTPW?

OTPW consists of a one-time password generator and PAM-integrated verification routines. In OTPW, one-time passwords are generated a priori with the generator and carried by the user securely (e.g., printed on a sheet of paper). Cryptographic hashes of the generated passwords are then stored on the SSH server host. When a user logs in with a one-time password, OTPW's PAM module verifies the password and invalidates it to prevent re-use.

Step One: Install and Configure OTPW on Linux

Debian, Ubuntu or Linux Mint:

Install the OTPW packages with apt-get.

$ sudo apt-get install libpam-otpw otpw-bin

Open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).

#@include common-auth

and add the following two lines (to enable one-time password authentication):

auth       required     pam_otpw.so
session    optional     pam_otpw.so
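
Depending on your distribution's sshd defaults, you may also need to make sure PAM and keyboard-interactive authentication are enabled in /etc/ssh/sshd_config (check your existing settings before restarting sshd):

ChallengeResponseAuthentication yes
UsePAM yes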


Fedora or CentOS/RHEL:

OTPW is not available as a prebuilt package on Red Hat based systems. So let’s install OTPW by building it from the source.

First, install the prerequisites:

$ sudo yum install git gcc pam-devel
$ git clone https://www.cl.cam.ac.uk/~mgk25/git/otpw
$ cd otpw

Open Makefile with a text editor, and edit the line that starts with “PAMLIB=” as follows.

On 64-bit system:

PAMLIB=/usr/lib64/security

On 32-bit system:

PAMLIB=/usr/lib/security

Compile and install it. Note that installation will automatically restart the SSH server, so be ready to be disconnected if you are on an SSH connection.

$ make
$ sudo make install

Now you need to update the SELinux policy, since /usr/sbin/sshd tries to write to the user's home directory, which the default SELinux policy does not allow. The following commands will do; if you are not using SELinux, skip this step.

$ sudo grep sshd /var/log/audit/audit.log | audit2allow -M mypol
$ sudo semodule -i mypol.pp

Next, open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).

#auth       substack     password-auth

and add the following two lines (to enable one-time password authentication):

auth       required     pam_otpw.so
session    optional     pam_otpw.so
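
The full article goes on to cover generating the actual password list. In short, each user runs otpw-gen, which stores password hashes in ~/.otpw and prints the one-time password list to standard output, to be printed and carried on paper (a minimal sketch; printing via lpr is just one option):

$ otpw-gen > temporary_password.txt
$ lpr temporary_password.txt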

Read more…

How to run iptables automatically after reboot on Debian

22/03/2016 Comments off

If you have customized iptables rules and would like to load them persistently across reboots on Debian, you can leverage if-up.d scripts located in /etc/network/if-up.d. On Debian, any script that is marked as executable and placed in /etc/network/if-up.d gets executed when a network interface is brought up.

In order to run iptables automatically after reboot on Debian, do the following.

First, customize iptables as you wish, and then save the current iptables rule set using the iptables-save command.

$ sudo iptables-save > /etc/firewall.conf

The above command dumps the current iptables rule set into the /etc/firewall.conf file, which the iptables-restore command can later use to restore the same rule set.

Now create the following if-up.d script called iptables that restores the saved iptables rule set.

$ sudo vi /etc/network/if-up.d/iptables
#!/bin/sh
iptables-restore < /etc/firewall.conf
$ sudo chmod +x /etc/network/if-up.d/iptables

Alternatively, you can add the command “iptables-restore < /etc/firewall.conf” to /etc/rc.local, which gets executed at the end of system boot-up.
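
As a complementary measure (my own addition, not part of the original tip), you can also re-save the rule set whenever an interface goes down, so that changes made at runtime survive the next boot:

$ sudo vi /etc/network/if-post-down.d/iptables-save
#!/bin/sh
iptables-save > /etc/firewall.conf
$ sudo chmod +x /etc/network/if-post-down.d/iptables-save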

Source: Xmodulo