Archive

Articles tagged ‘commandes’

Scripts shell de sauvegarde

21/06/2017 No comments

One of the simplest ways to back up a system is to use a shell script. For example, a script can define the directories to back up and pass those directories as arguments to the tar utility, which creates an archive file. The archive file can then be moved or copied to another location. The archive can also be created on a remote filesystem such as an NFS mount.

The tar utility creates an archive file from multiple files or directories. tar can also filter the files through compression utilities, reducing the size of the archive file.
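A minimal sketch of such a script (the directories to save and the destination are placeholders to adapt to your own system):

#!/bin/bash
# Directories to back up and the destination are examples only.
BACKUP_DIRS="/etc /home /var/www"
DEST="/mnt/backup"                                  # local disk or an NFS mount
ARCHIVE="$DEST/backup-$(date +%Y%m%d).tar.gz"

# -c create an archive, -z compress with gzip, -f write to the given file
tar -czf "$ARCHIVE" $BACKUP_DIRS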

Categories: Système

8 Practical Examples of Linux Xargs Command for Beginners

27/03/2017 Comments off

The Linux xargs command may not be a hugely popular command-line tool, but that doesn’t take away from the fact that it’s extremely useful, especially when combined with other commands like find and grep. If you are new to xargs and want to understand its usage, you’ll be glad to know that’s exactly what we’ll be doing here.

Before we proceed, please keep in mind that all the examples presented in this tutorial have been tested on Ubuntu 14.04 LTS. The shell used is Bash, version 4.3.11.

1. How does the xargs command work?

Before jumping into its usage, it’s important to understand what exactly xargs does. In layman’s terms, the tool – in its most basic form – reads data from standard input (stdin) and executes the command (supplied to it as an argument) one or more times based on the input read. Blanks and newlines in the input are treated as delimiters, while blank lines are ignored.


If no command is supplied as an argument to xargs, the default command the tool executes is echo. For example, in the session below, I just executed ‘xargs’ and entered ‘Hello World’ on stdin. When I pressed Ctrl+D (to tell xargs that we’re done with the input), the echo command was automatically executed, and ‘Hello World’ was printed again.
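A rough transcript of such a session might look like this (the first line after the command is typed by hand, then Ctrl+D is pressed):

$ xargs
Hello World        <- typed on stdin, followed by Ctrl+D
Hello World        <- printed by the default echo command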

How xargs command works

2. How to use xargs with another command?

While echo is the default command xargs executes, you can explicitly specify any other command. For example, you can pass the find command along with its ‘-name’ option as an argument to xargs, and then pass the name of the file (or type of files) you want find to search for as input through stdin.

Here’s the complete command in question:

xargs find -name

For example, we provided "*.txt" as input through stdin, which means we want the find command to search for all .txt files in the current directory (as well as its subdirectories).

Here’s the command in action:

Combine xargs with other commands
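An equivalent, non-interactive way to run the same thing is to pipe the pattern into xargs (the matched file names shown below are purely illustrative):

$ echo '*.txt' | xargs find . -name
./notes.txt
./docs/report.txt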


Categories: Système, Tutoriel

Advanced Linux Administration: Useful Commands

27/03/2017 Comments off

In this article, we will cover some advanced Linux administration commands:

=> mkfifo and script can be used to create a named pipe (FIFO) file and to watch, from another session, the commands being typed – example:

One user runs mkfifo to create the pipe (here called tube), then starts script to record everything typed into that pipe: script -f tube

A second user (over SSH or locally) runs cat tube: the capture starts, and everything typed by the first user is displayed to the second user.


The first user can end the script session by pressing Ctrl+D.
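A minimal sketch of the two sessions (tube is the pipe name used above; the terminal labels are just for illustration):

# Terminal 1 – the user whose session is being watched
$ mkfifo tube
$ script -f tube

# Terminal 2 – the observer, local or connected over SSH
$ cat tube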

=> logger writes a message directly to the main system log (syslog):

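For example (the message text is arbitrary, and the log file path varies by distribution – /var/log/syslog on Debian/Ubuntu, /var/log/messages on RHEL-like systems):

$ logger "backup job finished"
$ tail -n 1 /var/log/syslog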

=> write sends a message directly to the terminal of a logged-in user:

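For example (the user name and tty are placeholders; who lists the terminals currently in use):

$ who
$ write alice pts/1
The server will reboot in 5 minutes.
# end the message with Ctrl+D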

=> wall writes a message to all open terminals:

wall "shut down your PCs, it's the holidays!!!"


Source: Michel Bocciolesi

Categories: Système

The role of shells in the Linux environment

12/03/2017 Comments off

The shell is used for various purposes under Linux. The Linux user environment is made up of the following components:

  • Kernel – The core of Linux operating system.
  • Shell – Provides an interface between the user and the kernel.
  • Terminal emulator – The xterm program is a terminal emulator for the X Window System. It allows the user to enter commands and displays their results on screen.
  • Linux Desktop and Window Manager – A Linux desktop is a collection of various software applications. It includes the file manager, the window manager, the terminal emulator and much more. KDE and GNOME are two examples of complete desktop environments in Linux.

Login

A user can log in locally at the console when in runlevel 3, or graphically when in runlevel 5 (the level numbers may differ depending on the distribution). In both cases you need to provide a username and a password. Bash uses the following initialization and start-up files:

  1. /etc/profile – The systemwide initialization file, executed for login shells.
  2. /etc/bash.bashrc – The systemwide per-interactive-shell startup file. This is a non-standard file which may not exist on your distribution. Even if it exists, it will not be sourced unless it is done explicitly in another start-up file.
  3. /etc/bash.logout – The systemwide login shell cleanup file, executed when a login shell exits.
  4. $HOME/.bash_profile – The personal initialization file, executed for login shells.
  5. $HOME/.bashrc – The individual per-interactive-shell startup file.
  6. $HOME/.bash_logout – The individual login shell cleanup file, executed when a login shell exits.
  7. $HOME/.inputrc – Individual readline initialization file.

Bash Startup Scripts

A script of commands executed at login to set up the environment – for example, to set the JAVA_HOME path.
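As an illustration, a couple of typical lines one might add to $HOME/.bash_profile (the JDK path below is only an example and depends on your installation):

# excerpt from $HOME/.bash_profile
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # adjust to your JDK location
export PATH="$JAVA_HOME/bin:$PATH"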

Login Shell

A login shell is the first shell started when you log in to the system. Login shells set up the environment, which is then exported to non-login shells. When a user logs in, a Bash login shell reads /etc/profile first, followed by the first readable file among $HOME/.bash_profile, $HOME/.bash_login and $HOME/.profile.

Non-Login Shell

An interactive non-login shell (for example, a new terminal window opened from an existing session) skips the files above and reads $HOME/.bashrc instead (plus /etc/bash.bashrc on distributions that provide it).

Bash Logout Scripts

  • When a login shell exits, bash reads and executes commands from the file $HOME/.bash_logout, if it exists.

Source: Cyberciti

Categories: Système

Five reliable Windows Server backup solutions

06/07/2016 Comments off

When disaster strikes (and it will), you can put the pieces back together if you’ve got a solid backup solution in place.

If you have a Windows server and you’re not backing it up, you are tempting fate. It’s only a matter of time before something tragic happens: a hardware failure, a hacker, or a corrupted Windows update. Something will bring that server down and do everything it can to take your data with it.

Luckily, several reliable backup solutions are available for the Windows Server environment. They aren’t perfect. But the five solutions discussed here have their strengths and can be depended upon to keep your data safe. Now, before I reveal the big five, I want to make sure you understand how critical it is to be backing up that data to an external drive and to rotate that drive offsite. If you’re backing up drives and leaving everything onsite, should a fire (or some other catastrophe) occur, that data will be lost.


1: Acronis Backup and Restore (with Universal Restore)

Acronis Backup and Restore (Figure A) is the backup tool I use the most. It’s costly but well worth it. The newer version of Acronis does a great job of ensuring that there is room on your external drive for your latest backup. You can overwrite the old backup or go by the traditional cleanup rules established in Acronis Echo. But if you go the Acronis route, you should also purchase the Universal Restore feature. With this tool, you can take a full image of a backup and restore it to any hardware — so long as it meets the requirements of your platform. That way, you don’t need an exact duplicate of your server hardware sitting around waiting to have an image restored onto it. Acronis also has add-ons for Exchange and SQL Servers.

Figure A


Acronis Backup and Restore

2: Symantec Backup Exec

Symantec Backup Exec (Figure B) is a powerful backup solution that does a great job of "just working." And with its at-a-glance calendar, it’s easy to know when your backup has succeeded and when it hasn’t. If you’re using Symantec Backup Exec across a network (to back up clients/hosts), you can feel safe because it uses SSL for all network backup connections. Like Acronis, Symantec offers solutions for Exchange and SQL. Also, the ability to manage Vaults (backup destinations) is superb with Symantec. Just make sure that if a Vault is physically (and permanently) removed, you remove it from within the backup software. Otherwise, it will cause problems.

Figure B

Symantec Backup Exec

3: Carbonite

Carbonite (Figure C) is primarily an online backup service (although the Business and BusinessPremier plans allow you to back up to an external drive). It’s meant to back up data only. Carbonite is probably one of the most reliable online backup tools and can back up email, documents, POS files, financial data, pictures and other multimedia, and more. But to use Carbonite for Windows Server, you must use the BusinessPremier Plan ($229.00 per year), which will allow you to include as many servers running Windows Server 2003/2008/2011 as you like. Unlike Acronis, you won’t be backing up an image of your server with Carbonite — it’s a data-only backup tool. But it’s an excellent addition to any onsite backup solution.

Figure C

Carbonite

4: Windows NT Backup

Windows NT Backup (Figure D) is the free backup solution that comes with any Windows Server. Why do I include it here? Although it just does data backup and doesn’t offer many of the bells and whistles of the other tools, it is one of the only truly reliable backup solutions that will clear the Exchange logs for you. I always set up an NT Backup to run once a week (or month, depending upon the needs), just to have it clear the Exchange logs. Otherwise, you’ll be including a lot of unnecessary data in your backups, causing their size to get out of hand.

Figure D

Windows NT Backup

5: ShadowProtect 4 (Server)

ShadowProtect 4 (Figure E) is probably the least-known product in this list. But it falls into place next to Acronis as a feature-rich tool that can do everything — including recover to dissimilar hardware. ShadowProtect 4 also includes an easy-to-use tool that allows you to consolidate backup image files and convert the backups to either .VHD or .VMDK virtual machines. Another outstanding feature is its granular recovery, which allows for the quick recovery of individual files from within a backup. Unique to ShadowProtect is the Virtual Boot technology, which provides for automatic failover to a virtual server.

Figure E

ShadowProtect 4

Which backup?

Every company needs a backup plan. How far you go with your backup depends upon the size of the company, your needs, and your budget. But any of the above solutions will serve as a great place to start — or stop — for your Windows Server backup needs. Do you have a different backup solution to add to the list? Share your recommendations with fellow TechRepublic members.

Source: Jack Wallen – TechRepublic

Ubuntu Check RAM Memory Chip Speed and Specification From Within a Linux System

22/06/2016 Comments off

I want to add more RAM to my server running Ubuntu Linux. How do I find out my current RAM chip information such as its speed, type and manufacturer name within a Linux system without opening the case?

You need to use the dmidecode command which is a tool for dumping a computer’s DMI (some say SMBIOS) table contents in a human-readable format. This table contains a description of the system’s hardware components (such as RAM), as well as other useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can retrieve hardware information without having to probe for the actual hardware. Open a command-line terminal (select Applications > Accessories > Terminal), and then type:

$ sudo dmidecode --type memory

OR

# dmidecode --type memory | less

OR

$ sudo dmidecode --type 17

Sample outputs:

# dmidecode 2.10
SMBIOS version fixup (2.51 -> 2.6).
SMBIOS 2.6 present.
Handle 0x0011, DMI type 16, 15 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: None
	Maximum Capacity: 4 GB
	Error Information Handle: Not Provided
	Number Of Devices: 4
Handle 0x0012, DMI type 17, 27 bytes
Memory Device
	Array Handle: 0x0011
	Error Information Handle: No Error
	Total Width: 72 bits
	Data Width: 64 bits
	Size: 2048 MB
	Form Factor: DIMM
	Set: 1
	Locator: DIMM#1A
	Bank Locator: Bank 1
	Type: DDR2
	Type Detail: Synchronous
	Speed: 667 MHz
	Manufacturer: Not Specified
	Serial Number: Not Specified
	Asset Tag: Not Specified
	Part Number: Not Specified
Handle 0x0013, DMI type 17, 27 bytes
Memory Device
	Array Handle: 0x0011
	Error Information Handle: No Error
	Total Width: 72 bits
	Data Width: 64 bits
	Size: 2048 MB
	Form Factor: DIMM
	Set: 1
	Locator: DIMM#2A
	Bank Locator: Bank 2
	Type: DDR2
	Type Detail: Synchronous
	Speed: 667 MHz
	Manufacturer: Not Specified
	Serial Number: Not Specified
	Asset Tag: Not Specified
	Part Number: Not Specified
Handle 0x0014, DMI type 17, 27 bytes
Memory Device
	Array Handle: 0x0011
	Error Information Handle: No Error
	Total Width: 72 bits
	Data Width: 64 bits
	Size: 2048 MB
	Form Factor: DIMM
	Set: 1
	Locator: DIMM#1B
	Bank Locator: Bank 1
	Type: DDR2
	Type Detail: Synchronous
	Speed: 667 MHz
	Manufacturer: Not Specified
	Serial Number: Not Specified
	Asset Tag: Not Specified
	Part Number: Not Specified
Handle 0x0015, DMI type 17, 27 bytes
Memory Device
	Array Handle: 0x0011
	Error Information Handle: No Error
	Total Width: 72 bits
	Data Width: 64 bits
	Size: 2048 MB
	Form Factor: DIMM
	Set: 1
	Locator: DIMM#2B
	Bank Locator: Bank 2
	Type: DDR2
	Type Detail: Synchronous
	Speed: 667 MHz
	Manufacturer: Not Specified
	Serial Number: Not Specified
	Asset Tag: Not Specified
	Part Number: Not Specified
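If you are only interested in one field, such as the speed of each installed module, you can simply filter the output of the same command:

$ sudo dmidecode --type 17 | grep -i speed
	Speed: 667 MHz
	Speed: 667 MHz
	Speed: 667 MHz
	Speed: 667 MHz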

Track Multiple Files Simultaneously With MultiTail

07/06/2016 Comments off


The tail utility is one of the most useful admin tools, but it’s limited and outdated. Bring your log-watching capabilities into the 21st century and view multiple logs at one time with MultiTail.

What’s MultiTail? It’s an ncurses utility that can display multiple files using « windows » (much like GNU Screen or Tmux) in a terminal or at the console. It also supports color highlighting, filtering and much more.

To get MultiTail, head over to the download page or see if your operating system already has packages. On Debian-based systems, you should only need to look for the multitail package. The project released an update (5.2.8) on April 14, so the most recent release probably isn’t in your upstream package repo just yet.

Once it’s installed, run multitail and hit F1. It will pop up a help menu with all of its keybindings. You’ll need to scroll down (use the down arrow key) to see all of the commands. Otherwise it can be deceptive, and it looks like there are only a few commands. Let’s look at a couple of the commands you’ll want to start with.

First, run t — this displays the stats for your instance of MultiTail. To add files, use the a command.

If you want to start multitail with a file, use multitail filename, or multitail filename1 filename2 for more than one file. Use multitail -R 3 -l "command" -R 3 -l "command2" to see the output of two commands displayed in one window (see the sketch just below).
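As a concrete sketch (the log file paths and the commands are only examples, mirroring the -R/-l syntax quoted above):

# follow two log files in one multitail session
$ multitail /var/log/syslog /var/log/auth.log

# display the output of two commands, re-run every 3 seconds
$ multitail -R 3 -l "netstat -tn" -R 3 -l "df -h"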

Using the h command within MultiTail, you can set the height of each window. If you want to search through a window, use / or shift / to highlight the search string. Using I will toggle case sensitivity.

When you’ve used the search feature, you’ll get a buffer that displays in a « window » above the file. You can write this to a file using s, which will bring up a dialog that prompts for a filename to which to save.

If you get a MultiTail session configured just right you can save it for later using w from within the session. It will prompt you for a filename to which to save the script.

Basically, you can do just about anything you’d be able to do with tail and then some. It’s very interactive, and many of its commands have dialogs that will walk you through creating regular expressions, re-arranging windows and more.

The only caveat I have is that, occasionally, MultiTail is a bit crashy. Not wholly unreliable, but I have managed to crash MultiTail a few times while putting it through its paces. (I’ve never managed to crash tail…). But it’s still an invaluable tool to have around for any Linux or UNIX admin.

Joe ‘Zonker’ Brockmeier is a freelance writer and editor with more than 10 years covering IT. Formerly the openSUSE Community Manager for Novell, Brockmeier has written for Linux Magazine, Sys Admin, Linux Pro Magazine, IBM developerWorks, Linux.com, CIO.com, Linux Weekly News, ZDNet, and many other publications. You can reach Zonker at jzb@zonker.net and follow him on Twitter.

Source: ServerWatch

How to sniff HTTP traffic from the command line on Linux

23/05/2016 Comments off

Suppose you want to sniff live HTTP web traffic (i.e., HTTP requests and responses) on the wire for some reason. For example, you may be testing experimental features of a web server. Or you may be debugging a web application or a RESTful service. Or you may be trying to troubleshoot PAC (proxy auto config) or check for any malware files surreptitiously downloaded from a website. Whatever the reason is, there are cases where HTTP traffic sniffing is helpful, for system admins, developers, or even end users.

While packet sniffing tools such as tcpdump are commonly used for live packet dumps, you need to set up proper filtering to capture only HTTP traffic, and even then, their raw output typically cannot be interpreted at the HTTP protocol level so easily. Real-time web server log parsers such as ngxtop provide human-readable real-time web traffic traces, but they are only applicable when you have full access to live web server logs.

What would be nice is a tcpdump-like sniffing tool that targets HTTP traffic only. In fact, httpry is exactly that: an HTTP packet sniffing tool. httpry captures live HTTP packets on the wire and displays their content at the HTTP protocol level in a human-readable format. In this tutorial, let’s see how we can sniff HTTP traffic with httpry.

Install httpry on Linux

On Debian-based systems (Ubuntu or Linux Mint), httpry is not available in the base repositories, so build it from source:

$ sudo apt-get install gcc make git libpcap0.8-dev
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install

On Fedora, CentOS or RHEL, you can install httpry with yum as follows. On CentOS/RHEL, enable the EPEL repo before running yum.

$ sudo yum install httpry

If you still want to build httpry from the source on RPM-based systems, you can easily do that by:

$ sudo yum install gcc make git libpcap-devel
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install

Basic Usage of httpry

The basic use case of httpry is as follows.

$ sudo httpry -i <network-interface>

httpry then listens on the specified network interface and displays captured HTTP requests/responses in real time.

In most cases, however, you will be swamped by the fast-scrolling output as packets come in and out. In that case you will want to save the captured HTTP packets for offline analysis. For that, use either the ‘-b’ or the ‘-o’ option. The ‘-b’ option allows you to save raw HTTP packets into a binary file as-is, which can then be replayed with httpry later. The ‘-o’ option, on the other hand, saves the human-readable output of httpry into a text file.

To save raw HTTP packets into a binary file:

$ sudo httpry -i eth0 -b output.dump

To replay saved HTTP packets:

$ httpry -r output.dump

Note that when you read a dump file with the ‘-r’ option, you don’t need root privileges.

To save httpry‘s output to a text file:

$ sudo httpry -i eth0 -o output.txt

Advanced Usage of httpry

If you want to monitor only specific HTTP methods (e.g., GET, POST, PUT, HEAD, CONNECT, etc.), use the ‘-m’ option:

$ sudo httpry -i eth0 -m get,head

If you downloaded httpry‘s source code, you will notice that it comes with a collection of Perl scripts which aid in analyzing httpry‘s output. These scripts are found in the httpry/scripts/plugins directory. If you want to write a custom parser for httpry‘s output, these scripts can be good examples to start from. Some of their capabilities are:

  • hostnames: Display a list of unique host names with counts.
  • find_proxies: Detect web proxies.
  • search_terms: Find and count search terms entered in search services.
  • content_analysis: Find URIs which contain specific keywords.
  • xml_output: Convert output into XML format.
  • log_summary: Generate a summary of log.
  • db_dump: Dump log file data into a MySQL database.

Before using these scripts, first run httpry with the ‘-o’ option for some time. Once you have obtained the output file, run all the scripts on it at once with this command:

$ cd httpry/scripts
$ perl parse_log.pl -d ./plugins <httpry-output-file>

You may encounter warnings with several plugins. For example, the db_dump plugin may fail if you haven’t set up a MySQL database with the DBI interface. If a plugin fails to initialize, it is automatically disabled, so you can ignore those warnings.

After parse_log.pl has completed, you will see a number of analysis result files (*.txt/xml) in the httpry/scripts directory; log_summary.txt, for example, contains a summary of the parsed log.

To conclude, httpry can be a life saver if you are in a situation where you need to interpret live HTTP packets. That might not be so common for average Linux users, but it never hurts to be prepared. What do you think of this tool?

Source: Xmodulo

What are useful CLI tools for Linux system admins?

17/03/2016 Comments off

System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.


This article will present some of the most popular and useful CLI tools recommended for sysadmins in their day to day activities. If you would like to recommend any useful tool which is not listed here, don’t forget to share it in the comment section.

Network Tools

1. ping: Check end-to-end connectivity (RTT delay, jitter, packet loss) of a remote host with ICMP echo/reply. Useful to check system status and reachability.

2. hping: Network scanning and testing tool that can generate ICMP/TCP/UDP ping packets. Often used for advanced port scanning, firewall testing, manual path MTU discovery and fragmentation testing.

3. traceroute: Discover a layer-3 forwarding path from a local host to a remote destination host with TTL-limited ICMP/UDP/TCP probe packets. Useful to troubleshoot network reachability and routing problems.

4. mtr: A variation of traceroute which characterizes per-hop packet loss/jitter with running statistics. Useful to characterize routing path delays.

5. netcat/socat: A swiss army knife of TCP/IP networking, allowing you to read/write byte streams over TCP/UDP. Useful to troubleshoot firewall policies and service availability.

6. dig: DNS troubleshooting tool that can generate forward queries, reverse queries, find authoritative name servers, and check CNAME, MX and other DNS records. It can be instructed to query a specific DNS server of your choosing (a short example follows this list).

7. nslookup: Another DNS checking/troubleshooting tool. Works with all DNS queries and records. Can query a particular DNS server.

8. dnsyo: A DNS testing tool which checks DNS propagation by performing DNS lookups against a large number of open resolvers located in about 1,500 different networks around the world.

9. lsof: Show information about files (e.g., regular files, pipes or sockets) which are opened by processes. Useful to monitor processes or users in terms of their open network connections or opened files.

10. iftop: An ncurses-based TUI utility that can be used to monitor bandwidth utilization and network connections in real time for individual network interfaces. Useful to keep track of bandwidth-hogging applications, users, destinations and ports.

11. netstat: A network statistics utility that can show status information and statistics about open network connections (TCP/UDP ports, IP addresses), routing tables, TX/RX traffic and protocols. Useful for network related diagnosis and performance tuning.

12. tcpdump: A popular packet sniffer tool based on libpcap packet capture library. Can define packet capturing filters in Berkeley Packet Filters format.

13. tshark: Another CLI packet sniffer software with full compatibility with its GUI counterpart, Wireshark. Supports 1,000 protocols and the list is growing. Useful to troubleshoot, analyze and store information on live packets.

14. ip: A versatile CLI networking tool which is part of the iproute2 package. Used to check and modify routing tables, network device state, and IP tunneling settings. Useful to view routing tables, add/remove static routes, configure network interfaces, and otherwise troubleshoot routing issues.

15. ifup/ifdown: Used to bring up or shut down a particular network interface. Often a preferred alternative to restarting the entire network service.

16. autossh: A program which creates an SSH session and automatically restarts the session should it disconnect. Often useful to create a persistent reverse SSH tunnel across restrictive corporate networks.

17. iperf: A network testing tool which measures maximum bi-directional throughput between a pair of hosts by injecting customizable TCP/UDP data streams in between.

18. elinks/lynx: Text-based web browsers for a CLI-based server environment.
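As mentioned in the dig entry above, here is a quick illustration (example.com and the 8.8.8.8 resolver address are just examples):

# forward query for MX records
$ dig example.com MX +short

# reverse (PTR) lookup of an IP address
$ dig -x 8.8.8.8 +short

# query a specific DNS server instead of the system default
$ dig @8.8.8.8 example.com A +short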


16 commands to check hardware information on Linux

09/03/2016 Comments off

Source: BinaryTides

Hardware information

As with everything else, there are plenty of commands to check information about the hardware of your Linux system. Some commands report only on specific hardware components like the CPU or memory, while others cover multiple hardware units.

This post takes a quick look at some of the most commonly used commands to check information and configuration details about various hardware peripherals and devices. The list includes lscpu, hwinfo, lshw, dmidecode, lspci etc.

1. lscpu

The lscpu command reports information about the CPU and processing units. It does not need any further options for basic use.

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 23
Stepping:              10
CPU MHz:               1998.000
BogoMIPS:              5302.48
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              2048K
NUMA node0 CPU(s):     0-3

2. lshw – List Hardware

A general-purpose utility that reports detailed or brief information about many different hardware units such as the CPU, memory, disks, USB controllers, network adapters, etc. lshw extracts the information from various /proc files.

$ sudo lshw -short

H/W path        Device      Class       Description
===================================================
                            system      ()
/0                          bus         DG35EC
/0/0                        processor   Intel(R) Core(TM)2 Quad CPU    Q8400  @ 2.66GHz
/0/0/1                      memory      2MiB L2 cache
/0/0/3                      memory      32KiB L1 cache
/0/2                        memory      32KiB L1 cache
/0/4                        memory      64KiB BIOS
/0/14                       memory      8GiB System Memory
/0/14/0                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/14/1                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/14/2                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/14/3                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/100                      bridge      82G35 Express DRAM Controller
/0/100/2                    display     82G35 Express Integrated Graphics Controller
/0/100/2.1                  display     82G35 Express Integrated Graphics Controller
/0/100/19       eth0        network     82566DC Gigabit Network Connection
/0/100/1a                   bus         82801H (ICH8 Family) USB UHCI Controller #4
/0/100/1a.1                 bus         82801H (ICH8 Family) USB UHCI Controller #5
/0/100/1a.7                 bus         82801H (ICH8 Family) USB2 EHCI Controller #2
/0/100/1b                   multimedia  82801H (ICH8 Family) HD Audio Controller
/0/100/1c                   bridge      82801H (ICH8 Family) PCI Express Port 1
/0/100/1c.1                 bridge      82801H (ICH8 Family) PCI Express Port 2
/0/100/1c.2                 bridge      82801H (ICH8 Family) PCI Express Port 3
/0/100/1c.2/0               storage     JMB368 IDE controller
/0/100/1d                   bus         82801H (ICH8 Family) USB UHCI Controller #1
/0/100/1d.1                 bus         82801H (ICH8 Family) USB UHCI Controller #2
/0/100/1d.2                 bus         82801H (ICH8 Family) USB UHCI Controller #3
/0/100/1d.7                 bus         82801H (ICH8 Family) USB2 EHCI Controller #1
/0/100/1e                   bridge      82801 PCI Bridge
/0/100/1e/5                 bus         FW322/323 [TrueFire] 1394a Controller
/0/100/1f                   bridge      82801HB/HR (ICH8/R) LPC Interface Controller
/0/100/1f.2                 storage     82801H (ICH8 Family) 4 port SATA Controller [IDE mode]
/0/100/1f.3                 bus         82801H (ICH8 Family) SMBus Controller
/0/100/1f.5                 storage     82801HR/HO/HH (ICH8R/DO/DH) 2 port SATA Controller [IDE m
/0/1            scsi3       storage     
/0/1/0.0.0      /dev/sda    disk        500GB ST3500418AS
/0/1/0.0.0/1    /dev/sda1   volume      70GiB Windows NTFS volume
/0/1/0.0.0/2    /dev/sda2   volume      395GiB Extended partition
/0/1/0.0.0/2/5  /dev/sda5   volume      97GiB HPFS/NTFS partition
/0/1/0.0.0/2/6  /dev/sda6   volume      97GiB Linux filesystem partition
/0/1/0.0.0/2/7  /dev/sda7   volume      1952MiB Linux swap / Solaris partition
/0/1/0.0.0/2/8  /dev/sda8   volume      198GiB Linux filesystem partition
/0/3            scsi4       storage     
/0/3/0.0.0      /dev/cdrom  disk        DVD RW DRU-190A

Check out the following post to learn more about lshw

Get hardware information on Linux with lshw command


Categories: Système