Archive

Articles tagged 'performance'

ERROR 1040…again

08/12/2019 Comments off

A pretty common topic in Support tickets is the rather infamous error: ERROR 1040: Too many connections. The issue is pretty self-explanatory: your application/users are trying to create more connections than the server allows, or in other words, the current number of connections exceeds the value of the max_connections variable.
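A quick way to confirm the situation, while you can still get a session, is to compare the live connection count against the configured ceiling (a minimal check; adjust credentials to your setup):

mysql -e "SHOW STATUS LIKE 'Threads_connected';"
mysql -e "SHOW VARIABLES LIKE 'max_connections';"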

This situation on its own is already a problem for your end-users, but when on top of that you are not able to access the server to diagnose and correct the root cause, then you have a really big problem; most times you will need to terminate the instance and restart it to recover.

Root user can’t connect either! Why!?

In a properly set up environment, a user with the SUPER privilege will be able to access the instance and diagnose the error 1040 problem that is causing connection starvation, as explained in the manual:

mysqld actually permits max_connections + 1 client connections. The extra connection is reserved for use by accounts that have the SUPER privilege. By granting the privilege to administrators and not to normal users (who should not need it), an administrator who also has the PROCESS privilege can connect to the server and use SHOW PROCESSLIST to diagnose problems even if the maximum number of unprivileged clients are connected.
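In practice, diagnosing through that one reserved connection looks something like this (a sketch, assuming root holds SUPER; the 500 is an arbitrary emergency value, not a recommendation):

# Use the reserved connection to see who is hogging the others.
mysql -u root -p -e "SHOW FULL PROCESSLIST;"
# Buy yourself headroom while you hunt down the runaway clients;
# SET GLOBAL is dynamic but does not persist across a restart.
mysql -u root -p -e "SET GLOBAL max_connections = 500;"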

But we see lots of people who give SUPER privileges to their application or script users, either due to application requirements… Read more…

Optimizing MySQL Performance

06/12/2019 Comments off

Source: Optimiser les performances de MySQL (Quentin Busuttil)

SQL databases, and MySQL in particular, remain one of the cornerstones of the vast majority of websites. MySQL works very well out of the box; however, as soon as the database is under real load, you notice that the default settings deserve a bit of tuning. Let's take a look!

Recently, one of the Peerus servers started seeing serious load spikes during peak hours, with MySQL as the main culprit: a good hundred simultaneous connections and a few dozen queries per second against databases of several tens of gigabytes. That is a bit more than the average WordPress site, but nothing unmanageable for a well-tuned MySQL.

So before even considering a more powerful server, sharding the tables, or anything of the sort, we should squeeze the most out of our dear RDBMS. Read more…

MySQL: Setting Account Resource Limits

14/04/2019 Comments off

One means of restricting client use of MySQL server resources is to set the global max_user_connections system variable to a nonzero value. This limits the number of simultaneous connections that can be made by any given account, but places no limits on what a client can do once connected. In addition, setting max_user_connections does not enable management of individual accounts. Both types of control are of interest to MySQL administrators.
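For example, the global cap can be changed at runtime without a restart (a quick sketch; 100 is an arbitrary value):

# 0 means no limit; any nonzero value caps simultaneous connections per account.
mysql -u root -p -e "SET GLOBAL max_user_connections = 100;"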

To address such concerns, MySQL permits limits for individual accounts on use of these server resources:

  • The number of queries an account can issue per hour
  • The number of updates an account can issue per hour
  • The number of times an account can connect to the server per hour
  • The number of simultaneous connections to the server by an account

Any statement that a client can issue counts against the query limit, unless its results are served from the query cache. Only statements that modify databases or tables count against the update limit.

An account in this context corresponds to a row in the mysql.user table. That is, a connection is assessed against the User and Host values in the user table row that applies to the connection. For example, an account 'usera'@'%.example.com' corresponds to a row in the user table that has User and Host values of usera and %.example.com, to permit usera to connect from any host in the example.com domain. In this case, the server applies resource limits in this row collectively to all connections by usera from any host in the example.com domain because all such connections use the same account.

Before MySQL 5.0.3, an account was assessed against the actual host from which a user connects. This older method of accounting may be selected by starting the server with the --old-style-user-limits option. In this case, if usera connects simultaneously from host1.example.com and host2.example.com, the server applies the account resource limits separately to each connection. If usera connects again from host1.example.com, the server applies the limits for that connection together with the existing connection from that host.

To establish resource limits for an account at account-creation time, use the CREATE USER statement. To modify the limits for an existing account, use ALTER USER. (Before MySQL 5.7.6, use GRANT, for new or existing accounts.) Provide a WITH clause that names each resource to be limited. The default value for each limit is zero (no limit). For example, to create a new account that can access the customer database, but only in a limited fashion, issue these statements:

mysql> CREATE USER 'francis'@'localhost' IDENTIFIED BY 'frank'
    ->     WITH MAX_QUERIES_PER_HOUR 20
    ->          MAX_UPDATES_PER_HOUR 10
    ->          MAX_CONNECTIONS_PER_HOUR 5
    ->          MAX_USER_CONNECTIONS 2;

The limit types need not all be named in the WITH clause, but those named can be present in any order. The value for each per-hour limit should be an integer representing a count per hour. For MAX_USER_CONNECTIONS, the limit is an integer representing the maximum number of simultaneous connections by the account. If this limit is set to zero, the global max_user_connections system variable value determines the number of simultaneous connections. If max_user_connections is also zero, there is no limit for the account.
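To change the limits on an existing account (MySQL 5.7.6 and later), ALTER USER takes the same WITH clause; setting a limit back to its default of zero removes it. A short sketch against the account created above:

# Raise the query budget and lift the simultaneous-connection cap.
mysql -u root -p -e "ALTER USER 'francis'@'localhost'
    WITH MAX_QUERIES_PER_HOUR 100
         MAX_USER_CONNECTIONS 0;"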

Read more…

How to measure packets per second or throughput on high speed network interface

10/04/2019 Comments off

There are many traffic monitoring tools available on Linux, which can monitor/classify network traffic and report real-time traffic statistics in fancy user interfaces. Most of these tools (e.g., ntopng, iftop) are powered by libpcap, a packet capture library used to monitor network traffic in user space. Despite their versatility, however, libpcap-based network monitoring tools cannot scale to handle traffic on multi-Gigabit network interfaces, due to the overhead of user-space packet capture.

In this tutorial, I will present simple shell scripts that can monitor network traffic on a per-interface basis, without relying on the slow libpcap library. These scripts are fast enough to keep up with multi-Gigabit rates, but are only suitable if you are interested in "aggregate" network statistics per interface.

The secret of the scripts lies in the sysfs virtual filesystem, which the kernel uses to export device- or driver-related information to user space. Network interface statistics are exported via /sys/class/net/<ethX>/statistics.

For example, the statistics on eth0 interface are found in these files:

  • /sys/class/net/eth0/statistics/rx_packets: number of packets received
  • /sys/class/net/eth0/statistics/tx_packets: number of packets transmitted
  • /sys/class/net/eth0/statistics/rx_bytes: number of bytes received
  • /sys/class/net/eth0/statistics/tx_bytes: number of bytes transmitted
  • /sys/class/net/eth0/statistics/rx_dropped: number of packets dropped on receive
  • /sys/class/net/eth0/statistics/tx_dropped: number of packets dropped on transmit

The numbers stored in the files are automatically refreshed in real-time by the kernel. Therefore, you can write scripts that calculate traffic statistics based on these files.
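For example, you can read a counter directly; each file holds a single cumulative number, counted since the interface came up:

cat /sys/class/net/eth0/statistics/rx_packets
cat /sys/class/net/eth0/statistics/tx_bytes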

The following are two such scripts (thanks to joemiller). The first script counts the number of packets per second, received (RX) and sent (TX), on an interface, while the second measures the bandwidth of incoming (RX) and outgoing (TX) traffic. You do not need to install anything for these scripts to work.

Measure Packets per Second on an Interface

#!/bin/bash
# Count received (RX) and transmitted (TX) packets per second on an interface.
INTERVAL="1"  # update interval in seconds

if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        echo "shows packets-per-second"
        exit 1
fi

IF="$1"

while true
do
        # Sample the cumulative packet counters twice, $INTERVAL seconds apart.
        R1=$(cat "/sys/class/net/$IF/statistics/rx_packets")
        T1=$(cat "/sys/class/net/$IF/statistics/tx_packets")
        sleep "$INTERVAL"
        R2=$(cat "/sys/class/net/$IF/statistics/rx_packets")
        T2=$(cat "/sys/class/net/$IF/statistics/tx_packets")
        TXPPS=$((T2 - T1))
        RXPPS=$((R2 - R1))
        echo "TX $IF: $TXPPS pkts/s  RX $IF: $RXPPS pkts/s"
done

Measure Network Bandwidth on an Interface

#!/bin/bash
# Measure incoming (RX) and outgoing (TX) bandwidth on an interface.
INTERVAL="1"  # update interval in seconds

if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        exit 1
fi

IF="$1"

while true
do
        # Sample the cumulative byte counters twice, $INTERVAL seconds apart.
        R1=$(cat "/sys/class/net/$IF/statistics/rx_bytes")
        T1=$(cat "/sys/class/net/$IF/statistics/tx_bytes")
        sleep "$INTERVAL"
        R2=$(cat "/sys/class/net/$IF/statistics/rx_bytes")
        T2=$(cat "/sys/class/net/$IF/statistics/tx_bytes")
        TBPS=$((T2 - T1))
        RBPS=$((R2 - R1))
        TKBPS=$((TBPS / 1024))
        RKBPS=$((RBPS / 1024))
        echo "TX $IF: $TKBPS kB/s  RX $IF: $RKBPS kB/s"
done

[Screenshot: the above two scripts in action]

Source: Xmodulo 

How To Configure Redis Caching to Speed Up WordPress on Ubuntu 14.04

25/03/2019 Comments off

Introduction

Redis is an open-source key-value store that can operate both as an in-memory store and as a cache. Redis is a data structure server that can be used as a database server on its own, or paired with a relational database like MySQL to speed things up, as we're doing in this tutorial.

For this tutorial, Redis will be configured as a cache for WordPress to alleviate the redundant and time-consuming database queries used to render a WordPress page. The result is a WordPress site which is much faster, uses less database resources, and provides a tunable persistent cache. This guide applies to Ubuntu 14.04.

While every site is different, below is an example benchmark of a default WordPress installation home page with and without Redis, as configured from this guide. Chrome developer tools were used to test with browser caching disabled.

Default WordPress home page without Redis:

804ms page load time

Default WordPress home page with Redis:

449ms page load time

Note: This implementation of Redis caching for WordPress relies on a well-commented but third-party script. The script is hosted on DigitalOcean’s asset server, but was developed externally. If you would like to make your own implementation of Redis caching for WordPress, you will need to do some more work based on the concepts presented here.

Redis vs. Memcached

Memcached is also a popular cache choice. However, at this point, Redis does everything Memcached can do, with a much larger feature set. This Stack Overflow page has some general information as an overview or introduction for people new to Redis.

How does the caching work?

The first time a WordPress page is loaded, a database query is performed on the server. Redis remembers, or caches, this query. So, when another user loads the WordPress page, the results are served from Redis, in memory, without needing to query the database.

The Redis implementation used in this guide works as a persistent object cache for WordPress (no expiration). An object cache works by caching, in memory, the results of the SQL queries needed to load a WordPress page.

When a page loads, the cached query results are served from memory by Redis, so the query does not have to hit the database. The result is much faster page load times and less impact on database resources. If a query result is not available in Redis, the database provides it and Redis adds it to its cache.

If a value is updated in the database (for example, a new post or page is created in WordPress) the Redis value for that query is invalidated to prevent bad cached data from being presented.
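Conceptually it is a key-value lookup keyed on the query, and you can reproduce the same semantics from the Redis command line (the key and value below are made-up illustrations, not the actual keys the WordPress script generates):

# A cache miss stores the freshly computed result...
redis-cli SET "wp:query:recent_posts" "<serialized query result>"
# ...and later loads are served straight from memory.
redis-cli GET "wp:query:recent_posts"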

If you run into problems with caching, the Redis cache can be purged by using the flushall command from the Redis command line:

redis-cli

Once you see the prompt, type:

flushall
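Or, as a single shell command:

redis-cli flushall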

Additional Reference: WordPress Object Cache Documentation

Prerequisites

Before starting this guide, you’ll need to set up a sudo user and install WordPress.

  • Ubuntu 14.04 Droplet (1 GB or higher recommended)
  • Add a sudo user
  • Install WordPress. This guide has been tested with these instructions, although there are many ways to install WordPress

Read more…

Categories: Logiciel