


Installing a Samsung SCX-4521F in Linux – Ubuntu

December 2, 2009 · Leave a Comment
Filed under: Featured, Kernel, Linux, Ubuntu 



This covers the USB interface. Here is the checklist:

  1. Get the Samsung Unified Driver version 3
  2. Install it via tar xvzf SamsungXXX.tar.gz && cd cdroot/Linux && sudo ./install.sh. I did it without using X at all; I gather there’s a little GUI, but it doesn’t ask anything the text install doesn’t.
  3. Curse Samsung for putting the freaking icon on your desktop and in the root of your applications menu without asking.
  4. Clean up the crap you just cursed about. UPDATE: forgot to mention the specifics: /bin/Desktop, /bin/.gnome-desktop, /usr/sbin/Desktop, and /usr/sbin/.gnome-desktop. Yes, that install script has some bugs in it!
  5. Try a test page. Samsung ‘helpfully’ decided to make the new printer the default, so lpr favourite.pdf should work fine.
  6. Try scanimage -L and see if you can see the printer. If not (bet you can’t) try sudo scanimage -L. If that works, it’s a permissions problem.
  7. Add all the appropriate users to the lp group, e.g. sudo adduser joeUser lp. The installer says it’s doing something like this, but whatever it does doesn’t work. Ubuntu has the handy scanner group as well, but the lp group ends up owning /dev/usb/0. Rather than muck about with updated udev rules, I just added the users to both the scanner and lp groups and it works fine.
  8. Log out of that terminal session and start a new one. A terminal doesn’t pick up updated group membership until the next login. I don’t know whether that applies to the ‘GUI’ users and groups tool or not. Let me know….
  9. Use scanimage -L to see if sane can see the scanner.
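
A quick way to verify that step 7 took effect is a sketch like the following; the lp and scanner group names come from the steps above, and the adduser invocation assumes Debian/Ubuntu:

```shell
#!/bin/sh
# Check whether the current user is in the lp and scanner groups.
# If not, print the command that would fix it.
u=$(id -un)
for g in lp scanner; do
  if id -nG "$u" | tr ' ' '\n' | grep -qx "$g"; then
    echo "$u is in $g"
  else
    echo "$u is NOT in $g -- run: sudo adduser $u $g, then log out and back in"
  fi
done
```

Remember that, per step 8, the new membership only shows up in sessions started after the change.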

Why sys-fs/device-mapper is blocking sys-fs/udev-146

November 13, 2009 · Leave a Comment
Filed under: Featured, Kernel, Linux, Ubuntu 
So here is how it is done.

sys-fs/device-mapper is blocking sys-fs/udev-146

Running emerge on a server hit this block. The solution was simple:

emerge --unmerge sys-fs/device-mapper
emerge -av sys-fs/lvm2

sys-fs/device-mapper has now been integrated into lvm2, so the separate package is no longer needed.

FSF updates list of free GNU/Linux distributions, adding Kongoni

September 12, 2009 · Leave a Comment
Filed under: Featured, Linux 

BOSTON, Massachusetts, USA — Friday, September 11, 2009 — The Free Software Foundation (FSF) today announced updates related to its list of fully free GNU/Linux distributions, including the addition of one new distribution called Kongoni, and a milestone release of the Trisquel system.

Trisquel, which was added to the list last December, has issued its 3.0 release, codenamed “Dwyn.” It is the first in a new series of short term support releases that will be updated every six months with new software to add features, improved performance, and hardware compatibility.

Kongoni, named after the Shona word for “gnu,” is based in Africa. For optimal performance with minimal bandwidth requirements, it uses a packaging system called “ports” that downloads programs as source and builds them automatically.

Trisquel, Kongoni, and the other GNU/Linux system distributions on the FSF’s list only include and only propose free software. They reject nonfree applications, nonfree programming platforms, nonfree drivers, nonfree firmware “blobs,” and any other nonfree software and documentation. They uphold a commitment to remove any such components as they are discovered — a commitment most well-known GNU/Linux distributions do not follow.

FSF operations manager John Sullivan said, “It’s very encouraging to see this list continuing to increase in both quality and quantity. While others continue to propagate the outdated claim that it’s too hard or not possible to make distributions without proprietary binary firmware and other nonfree programs, free software activists and developers working on projects like Kongoni and Trisquel continue to prove them wrong.”

Both Trisquel and Kongoni are calling for more contributors to help. Trisquel is seeking mirrors, package maintainers, beta testers, translators, and documentation writers. Kongoni is looking for people to help with publicity, and writing new package ports. More information about using and contributing to Trisquel and Kongoni can be found at their respective Web sites, http://trisquel.info/ and http://kongoni.co.za/.

The FSF’s guidelines for free system distributions are online at http://www.gnu.org/distros/free-system-distribution-guidelines.html, and the distributions committed to those guidelines are listed at http://www.gnu.org/distros/free-distros.html.

Hacker Runs Ubuntu Linux on Amazon's Kindle 2

September 6, 2009 · Leave a Comment
Filed under: Featured, Linux 

Amazon’s Kindle family of e-book readers has changed the game on e-books and e-book distribution, by making an intuitive, easy to use e-paper reader into a mass-market publishing platform. Books are now sold on many websites as “Hardcover, Paperback, Kindle”, referencing the format of the book’s publication in varying editions. Now, a hacker has put a variation of Linux on a Kindle 2, raising the question as to what Amazon might do to enhance the device’s range of operative capabilities.

Other e-book readers using e-paper displays are now on the market, and Sony has created a touch-enabled e-paper display in collaboration with E Ink, the same firm that produces Amazon’s Kindle screens. But the Kindle, in part due to the book-publishing focus of its designers and marketers (the device is meant to be as much like reading a book as possible, meaning you don’t search Google on the same page just because someone whispered something in your ear while you were reading), is the paradigm marker.

There is a clear vested interest in not turning the Kindle into a mobile computing platform, because that would make free reading widely available and possibly further undermine Amazon’s sales of hard-copy text, something the Kindle is meant to counter. But Sony’s Reader devices will now make available community library texts and Google Books free copies, and Apple’s iPhone and anticipated touchscreen tablet will put pressure on Amazon to make the Kindle more dynamic and web-ready.

With a hacker successfully running Ubuntu Linux on a Kindle 2, there is an experimental precedent that could lead to new product directions for the Kindle family of devices. Amazon is already experimenting with a very rudimentary web browser, best for text-only viewing, which could benefit from significant optimization in terms of processing power and speed, as well as coordination with web-site and coding designers to make more websites instantly and comfortably text-only.

Privileging text is something the Kindle is designed to do, but making its relationship with non-Kindle media more dynamic would let the text-first platform Amazon is aiming for capitalize on the power of the information age. Not aiming for at least that level of interactivity seems, at this point, a bad choice for Amazon.

One blog commenter called the Ubuntu hack for the Kindle “Probably the lowest power netbook in the world”. The comment is telling, because it points out a very important fact of the Kindle paradigm: the e-paper screen, because it does not require backlighting, only needs power to change what it displays, not to keep displaying it. Consumers are aware of this benefit, because it both reduces energy consumption and extends battery life.

E-paper is fundamentally different from LCD and plasma monitors, far less energy intensive, and has its own appeal for a wide range of consumers and industry interests. Creating a fully functional e-paper netbook or handheld computer could be a major breakthrough not only for the device’s producer, but for the direction of publishing and communications technologies.

There is interest, for instance, in using low-energy e-paper to redirect processing power and achieve dramatically higher processing speeds for major computing and web functions (at least where video is not part of the function). Amazon may be ideally positioned to test the ability of e-paper devices to function across computational and web-communications platforms for most communicative functions.

Shuttle unveils X500V all-in-one with Linux

September 4, 2009 · Leave a Comment
Filed under: Featured, OpenSUSE 
Source: Slippery Brick

Shuttle is well known in the small form factor market and is becoming increasingly known for producing some cool all-in-one computers as well. Shuttle made its name producing barebones small form factor systems that appealed to gamers looking for a desktop computer that was easy to take to and from a LAN party.

Shuttle unveiled a slick new all-in-one computer today called the X500V that will make fans of open source operating systems happy. The machine is called the X50 All-in-One PC and ships with openSUSE Linux preinstalled.

The machine is very thin at only 3.6cm thick and comes in black or white colors. Hardware for the machine includes an Atom 330 dual core CPU, 1GB of RAM, 160GB of storage, a 15.6-inch touchscreen with a resolution of 1366 x 768, a 1.3-megapixel webcam, speakers, and a memory card reader. The machine also has VGA out and integrated Wi-Fi. The system sells for 444 EUR and should make it to the U.S. at some point.

Archiving and Compression in Linux Systems

May 1, 2009 · Leave a Comment
Filed under: Featured, Linux, Ubuntu 

Set Cron in Ubuntu (Linux)

December 15, 2008 · Leave a Comment
Filed under: Featured, Linux, Ubuntu 

What is Cron?

Cron is a daemon used for scheduling tasks to be executed at a certain time. Each user has a crontab file, allowing them to specify actions and times that they should be executed. There is also a system crontab, allowing tasks such as log rotation and locate database updating to be done regularly.

The following directions tell you how to set up scheduled tasks the traditional way using the command line, but it is much easier to use the GNOME Scheduled Tasks tool in System → Preferences.

What is Crontab?

A crontab is a simple text file that holds a list of commands that are to be run at specified times. These commands, and their related run times, are controlled by the cron daemon and are executed in the system’s background. More information can be found by viewing the crontab’s man page.

Using Cron

To use cron, simply add entries to your crontab file.

Crontab Sections

Each of the sections is separated by a space; only the final section (the command) may itself contain spaces. No spaces are allowed within sections 1-5, only between them. Sections 1-5 indicate when and how often you want the task to be executed. This is how a cron job is laid out:

minute (0-59), hour (0-23, 0 = midnight), day (1-31), month (1-12), weekday (0-6, 0 = Sunday), command

01 04 1 1 1 /usr/bin/somedirectory/somecommand

The above example will run /usr/bin/somedirectory/somecommand at 4:01am on January 1st plus every Monday in January. An asterisk (*) can be used so that every instance (every hour, every weekday, every month, etc.) of a time period is used. Code:

01 04 * * * /usr/bin/somedirectory/somecommand

The above example will run /usr/bin/somedirectory/somecommand at 4:01am on every day of every month.

Comma-separated values can be used to run more than one instance of a particular command within a time period. Dash-separated values can be used to run a command over a continuous range. Code:

01,31 04,05 1-15 1,6 * /usr/bin/somedirectory/somecommand

The above example will run /usr/bin/somedirectory/somecommand at 01 and 31 past the hours of 4:00am and 5:00am on the 1st through the 15th of every January and June.

The “/usr/bin/somedirectory/somecommand” text in the above examples indicates the task which will be run at the specified times. It is recommended that you use the full path to the desired commands, as shown in the above examples. Enter “which somecommand” in the terminal to find the full path to somecommand. The crontab will begin running as soon as it is properly edited and saved.

Crontab Options

  • The -l option causes the current crontab to be displayed on standard output.
  • The -r option causes the current crontab to be removed.
  • The -e option is used to edit the current crontab using the editor specified by the EDITOR environment variable.

After you exit from the editor, the modified crontab will be checked for accuracy and, if there are no errors, installed automatically. The file is stored in /var/spool/cron/crontabs but should only be edited via the crontab command.

Further Considerations

The above commands are stored in a crontab file belonging to your user account and executed with your level of permissions. If you want to regularly run a command requiring a greater level of permissions, set up a root crontab file using:

sudo crontab -e

Depending on the commands being run, you may need to expand the root user’s PATH variable by putting the following line at the top of the crontab file:

PATH=/usr/sbin:/usr/bin:/sbin:/bin

It is sensible to test that your cron jobs work as intended. One method for doing this is to set up the job to run a couple of minutes in the future and then check the results before finalising the timing. You may also find it useful to put the commands into script files that log their success or failure, e.g.:

echo "Nightly Backup Successful: $(date)" >> /tmp/mybackup.log

For more information, see the man pages
for cron and crontab (man is detailed on the BasicCommands page). If your machine is regularly switched off, you may also be interested in at and anacron, which provide other approaches to scheduled tasks. For example, anacron offers simple system-wide directories for running commands hourly, daily, weekly, and monthly. Scripts to be executed in said times can be placed in /etc/cron.hourly/, /etc/cron.daily/, /etc/cron.weekly/, and /etc/cron.monthly/. All scripts in each directory are run as root, and a specific order to running the scripts can be specified by prefixing the scripts’ filenames with numbers (see the man page for run-parts for more details). Although the directories contain periods in their names, run-parts will not accept a file name containing one and will fail silently when encountering them (bug #38022). Either rename the file or use a symlink (without a period) to it instead.

Advanced Crontab

The Crontabs discussed above are user crontabs. Each of the above crontabs is associated with a user, even the system crontab which is associated with the root user. There are two other types of crontab.

Firstly, as mentioned above, anacron uses the run-parts command and the /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories. However, anacron itself is invoked from the /etc/crontab file. This file could be used for other cron commands, but probably shouldn’t be. Here’s an example line from a fictitious /etc/crontab:

00 01 * * * rusty /home/rusty/rusty-list-files.sh

This would run Rusty’s command script as user rusty from his home directory. However, it is not usual to add commands to this file. While an experienced user should know about it, it is not recommended that you add anything to /etc/crontab. Apart from anything else, this could cause problems if the /etc/crontab file is affected by updates: Rusty could lose his command.

The second type of crontab is found in /etc/cron.d. Within that directory are small named crontabs. The directory is often used by packages, and these small crontabs allow a user to be associated with the commands in them.

Instead of adding a line to /etc/crontab, which Rusty knows is not a good idea, he might well add a file named rusty to /etc/cron.d containing his cron line above. This would not be affected by updates, and it is a well-known location.

When would you use these alternate crontab locations? Well, on a single user machine or a shared machine such as a school or college server, a user crontab would be the way to go. But in a large IT department, where several people might look after a server, then /etc/cron.d is probably the best place to install crontabs – it’s a central point and saves searching for them!

You may not need to look at /etc/crontab or /etc/cron.d, let alone edit them by hand. But an experienced user should perhaps know about them and that the packages that he/she installs may use these locations for their crontabs.

Tips

crontab -e uses the EDITOR environment variable; to change the editor to one of your own choice, just set that variable. You may want to set EDITOR in your .bashrc, because many commands use this variable. Let’s set the EDITOR to nano, a very easy editor to use:

export EDITOR=nano

There are also files you can edit for system-wide cron jobs. The most common file is located at /etc/crontab, and this file follows a slightly different syntax than a normal crontab file. Since it is the base crontab that applies system-wide, you need to specify what user to run the job as; thus, the syntax is now:

minute(s) hour(s) day(s)_of_month month(s) day(s)_of_week user command

It is recommended, however, that you try to avoid using /etc/crontab unless you need the flexibility offered by it, or if you’d like to create your own simplified anacron-like system using run-parts, for example. For all cron jobs that you want to have run under your own user account, you should stick with using crontab -e to edit your local cron jobs rather than editing the system-wide /etc/crontab.

It is possible to run GUI applications via cron jobs. This can be done by telling cron which display to use.

00 06 * * * env DISPLAY=:0 gui_appname

The env DISPLAY=:0 portion will tell cron to use the current display (desktop) for the program “gui_appname”.

And if you have multiple monitors, don’t forget to specify on which one the program is to be run. For example, to run it on the first screen (default screen) use :

00 06 * * * env DISPLAY=:0.0 gui_appname

The env DISPLAY=:0.0 portion will tell cron to use the first screen of the current display for the program “gui_appname”.

Crontab Example

Below is an example of how to set up a crontab to run updatedb, which updates the slocate database. Open a terminal, type “crontab -e” (without the double quotes) and press Enter. Type the following line, substituting the full path of the application you wish to run for the one shown below, into the editor:

45 04 * * * /usr/bin/updatedb

Save your changes and exit the editor.

Crontab will let you know if you made any mistakes. The crontab will be installed and begin running if there are no errors. That’s it. You now have a cron job set up to run updatedb, which updates the slocate database, every morning at 4:45.

Note: The double-ampersand (&&) can also be used in the “command” section to run multiple commands consecutively.

45 04 * * * /usr/sbin/chkrootkit && /usr/bin/updatedb

The above example will run chkrootkit followed by updatedb at 4:45am daily – providing you have all listed apps installed.

Configure Web Logs in Apache

August 24, 2008 · Leave a Comment
Filed under: Apache, Featured, Linux 

One of the many pieces of the Website puzzle is Web logs. Traffic analysis is central to most Websites, and the key to getting the most out of your traffic analysis revolves around how you configure your Web logs.

Apache is one of the most — if not the most — powerful open source solutions for Website operations. You will find that Apache’s Web logging features are flexible for the single Website or for managing numerous domains requiring Web log analysis.

Author’s Note: While most of this piece discusses configuration options for any operating system Apache supports, some of the content will be Unix/Linux (*nix) specific, which now includes Macintosh OS X and its underlying Unix kernel.

For the single site, Apache is pretty much configured for logging in the default install. The initial httpd.conf file (found in /etc/httpd/conf/httpd.conf in most cases) should have a section on logs that looks similar to this (Apache 2.0.x), with descriptive comments for each item. Your default logs folder will be found in /etc/httpd/logs. This location can be changed when dealing with multiple Websites, as we’ll see later. For now, let’s review this section of log configuration.

ErrorLog logs/error_log

LogLevel warn

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

LogFormat "%h %l %u %t \"%r\" %>s %b" common

LogFormat "%{Referer}i -> %U" referer

LogFormat "%{User-agent}i" agent

CustomLog logs/access_log combined

Error Logs

The error log contains messages sent from Apache for errors encountered during the course of operation. This log is very useful for troubleshooting Apache issues on the server side.

Apache Log Tip: If you are monitoring errors or testing your server, you can use the command line to interactively watch log entries. Open a shell session and type "tail -f /path/to/error_log". This will show you the last few entries in the file and also continue to show new entries as they occur.

There are no real customization options available, other than telling Apache where to establish the file, and what level of error logging you seek to capture. First, let’s look at the error log configuration code from httpd.conf.

ErrorLog logs/error_log

You may wish to store all error-related information in one error log. If so, the above is fine, even for multiple domains. However, you can specify an error log file for each individual domain you have. This is done in the <VirtualHost> container with an entry like this:

<VirtualHost 10.0.0.2>

DocumentRoot "/home/sites/domain1/html/"

ServerName domain1.com

ErrorLog /home/sites/domain1/logs/error.log

</VirtualHost>

If you are responsible for reviewing error log files as a server administrator, it is recommended that you maintain a single error log. If you’re hosting for clients, and they are responsible for monitoring the error logs, it’s more convenient to specify individual error logs they can access at their own convenience.

The setting that controls the level of error logging to capture follows below.

LogLevel warn

Apache’s definitions for their error log levels are as follows:

(Table of Apache error log level definitions; image not reproduced here.)

Tracking Website Activity

Often by default, Apache will generate three activity logs: access, agent and referrer. These track the accesses to your Website, the browsers being used to access the site, and the referring URLs that your site visitors have arrived from.

It is commonplace now to utilize Apache’s “combined” log format, which compiles all three of these logs into one logfile. This is very convenient when using traffic analysis software as a majority of these third-party programs are easiest to configure and schedule when only dealing with one log file per domain.

Let’s break down the code in the combined log format and see what it all means.

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

LogFormat starts the line and simply tells Apache you are defining a log file type (or nickname), in this case, combined. Now let’s look at the cryptic symbols that make up this log file definition.

(Table defining each symbol in the combined log format; image not reproduced here.)

To review all of the available configuration codes for generating a custom log, see Apache’s docs on the mod_log_config module, which powers log files in Apache.

Apache Log Tip: You could capture more from the HTTP header if you so desired. A full listing and definition of data in the header is found at the World Wide Web Consortium.

For a single Website, the default entry would suffice:

CustomLog logs/access_log combined

However, for logging multiple sites, you have a few options. The most common is to identify individual log files for each domain. This is seen in the example below, again using the log directive within the <VirtualHost> container for each domain.

<VirtualHost 10.0.0.2>

DocumentRoot "/home/sites/domain1/html/"

ServerName domain1.com

ErrorLog /home/sites/domain1/logs/error.log

CustomLog /home/sites/domain1/logs/web.log

</VirtualHost>

<VirtualHost 10.0.0.3>

DocumentRoot "/home/sites/domain2/html/"

ServerName domain2.com

ErrorLog /home/sites/domain2/logs/error.log

CustomLog /home/sites/domain2/logs/web.log

</VirtualHost>

<VirtualHost 10.0.0.4>

DocumentRoot "/home/sites/domain3/html/"

ServerName domain3.com

ErrorLog /home/sites/domain3/logs/error.log

CustomLog /home/sites/domain3/logs/web.log

</VirtualHost>

In the above example, we have three domains with three unique Web logs (using the combined format we defined earlier). A traffic analysis package could then be scheduled to process these logs and generate reports for each domain independently.
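
As a stand-in for a full analysis package, a small shell loop can summarize each domain’s log. This is only a sketch: the domain names and paths mirror the VirtualHost examples above, and the field position assumes the combined format defined earlier:

```shell
#!/bin/sh
# Count requests per HTTP status code for each domain's combined-format log.
LOGROOT=${LOGROOT:-/home/sites}   # assumption: matches the VirtualHost layout above
for d in domain1 domain2 domain3; do
  log="$LOGROOT/$d/logs/web.log"
  [ -f "$log" ] || continue       # skip domains with no log yet
  echo "== $d =="
  # In the combined format, field 9 is the response status code.
  awk '{print $9}' "$log" | sort | uniq -c | sort -rn
done
```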

This method works well for most hosts. However, there may be situations where this could become unmanageable. Apache recommends a special single log file for large virtual host environments and provides a tool for generating individual logs per individual domain.

We will call this log type the cvh format, standing for “common virtual host.” Simply by adding a %v (which stands for virtual host) to the beginning of the combined log format defined earlier and giving it a new nickname of cvh, we can compile all domains into one log file, then automatically split them into individual log files for processing by a traffic analysis package.

LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" cvh

In this case, we do not make any CustomLog entries in the <VirtualHost> containers and simply have one log file generated by Apache. A program created by Apache called split_logfile is included in the src/support directory of your Apache sources. If you did not compile from source or do not have the sources, you can get the Perl script.

The individual log files created from your master log file will be named for each domain (virtual host) and look like: virtualhost.log.

Log Rotation

Finally, we want to address log rotation. High traffic sites will generate very large log files, which will quickly swallow up valuable disk space on your server. You can use log rotation to manage this process.

There are many ways to handle log rotation, and various third-party tools are available as well. However, we’re focusing on configurations native to Apache, so we will look at a simple log rotation scheme here. I’ll include links to more flexible and sophisticated log rotation options in a moment.

This example uses a rudimentary shell script to move the current Web log to an archive log, compresses the old file and keeps an archive for as long as 12 months, then restarts Apache with a pause to allow the log files to be switched out.

mv web11.tgz web12.tgz

mv web10.tgz web11.tgz

mv web9.tgz web10.tgz

mv web8.tgz web9.tgz

mv web7.tgz web8.tgz

mv web6.tgz web7.tgz

mv web5.tgz web6.tgz

mv web4.tgz web5.tgz

mv web3.tgz web4.tgz

mv web2.tgz web3.tgz

mv web1.tgz web2.tgz

mv web.tgz web1.tgz

mv web.log web.old

/usr/sbin/apachectl graceful

sleep 300

tar cvfz web.tgz web.old

This code can be copied into a file called logrotate.sh, and placed inside the folder where your web.log file is stored (or whatever you name your log file, e.g. access_log, etc.). Just be sure to modify for your log file names and also chmod (change permissions on the file) to 755 so it becomes an executable.
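
The twelve mv commands above can also be collapsed into a loop, which is easier to adapt if you want a different archive depth. A sketch, wrapped in a function so the target directory is a parameter (the web.log naming follows the script in the text; the Apache restart is left commented out):

```shell
#!/bin/sh
# Loop-based log rotation keeping up to 12 compressed archives.
rotate_weblogs() {
  dir=$1
  cd "$dir" || return 1
  i=11
  while [ "$i" -ge 1 ]; do
    # Shift webN.tgz -> webN+1.tgz, oldest first.
    if [ -f "web$i.tgz" ]; then mv "web$i.tgz" "web$((i+1)).tgz"; fi
    i=$((i-1))
  done
  if [ -f web.tgz ]; then mv web.tgz web1.tgz; fi
  if [ -f web.log ]; then mv web.log web.old; fi
  # /usr/sbin/apachectl graceful && sleep 300   # let Apache reopen its logs
  if [ -f web.old ]; then tar czf web.tgz web.old; fi
}
# Example: rotate_weblogs /path/to/your/logs
```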

This works fine for a single busy site. If you have more complex requirements for log rotation, be sure to see some of the following sites. In addition, many Linux distributions now come with a log rotation facility included. For example, Red Hat 9 comes with logrotate, a highly configurable log rotation utility (configured via /etc/logrotate.d/). To find out more, on your Linux system with logrotate installed, type man logrotate.

Apache Module mod_log_config

August 24, 2008 · Leave a Comment
Filed under: Apache, Featured, Linux 

Apache Module mod_log_config

Summary

This module provides for flexible logging of client requests. Logs are written in a customizable format, and may be written directly to a file, or to an external program. Conditional logging is provided so that individual requests may be included or excluded from the logs based on characteristics of the request.

Three directives are provided by this module: TransferLog to create a log file, LogFormat to set a custom format, and CustomLog to define a log file and format in one step. The TransferLog and CustomLog directives can be used multiple times in each server to cause each request to be logged to multiple files.


Custom Log Formats

The format argument to the LogFormat and CustomLog directives is a string. This string is used to log each request to the log file. It can contain literal characters copied into the log files and the C-style control characters "\n" and "\t" to represent new-lines and tabs. Literal quotes and backslashes should be escaped with backslashes.

The characteristics of the request itself are logged by placing “%” directives in the format string, which are replaced in the log file by the values as follows:

Format String Description
%% The percent sign (Apache 2.0.44 and later)
%...a Remote IP-address
%...A Local IP-address
%...B Size of response in bytes, excluding HTTP headers.
%...b Size of response in bytes, excluding HTTP headers. In CLF format, i.e. a ‘-‘ rather than a 0 when no bytes are sent.
%...{Foobar}C The contents of cookie Foobar in the request sent to the server.
%...D The time taken to serve the request, in microseconds.
%...{FOOBAR}e The contents of the environment variable FOOBAR
%...f Filename
%...h Remote host
%...H The request protocol
%...{Foobar}i The contents of Foobar: header line(s) in the request sent to the server.
%...l Remote logname (from identd, if supplied). This will return a dash unless IdentityCheck is set On.
%...m The request method
%...{Foobar}n The contents of note Foobar from another module.
%...{Foobar}o The contents of Foobar: header line(s) in the reply.
%...p The canonical port of the server serving the request
%...P The process ID of the child that serviced the request.
%...{format}P The process ID or thread id of the child that serviced the request. Valid formats are pid and tid. (Apache 2.0.46 and later)
%...q The query string (prepended with a ? if a query string exists, otherwise an empty string)
%...r First line of request
%...s Status. For requests that got internally redirected, this is the status of the *original* request; use %...>s for the last.
%...t Time the request was received (standard English format)
%...{format}t The time, in the form given by format, which should be in strftime(3) format. (potentially localized)
%...T The time taken to serve the request, in seconds.
%...u Remote user (from auth; may be bogus if return status (%s) is 401)
%...U The URL path requested, not including any query string.
%...v The canonical ServerName of the server serving the request.
%...V The server name according to the UseCanonicalName setting.
%...X Connection status when response is completed:

X = connection aborted before the response completed.
+ = connection may be kept alive after the response is sent.
- = connection will be closed after the response is sent.

(This directive was %...c in late versions of Apache 1.3, but this conflicted with the historical ssl %...{var}c syntax.)

%...I Bytes received, including request and headers, cannot be zero. You need to enable mod_logio to use this.
%...O Bytes sent, including headers, cannot be zero. You need to enable mod_logio to use this.

The "..." can be nothing at all (e.g., "%h %u %r %s %b"), or it can indicate conditions for inclusion of the item (which will cause it to be replaced with "-" if the condition is not met). The forms of condition are a list of HTTP status codes, which may or may not be preceded by "!". Thus, "%400,501{User-agent}i" logs User-agent: on 400 errors and 501 errors (Bad Request, Not Implemented) only; "%!200,304,302{Referer}i" logs Referer: on all requests which did not return some sort of normal status.
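The status-code condition can be sketched in a few lines of Python. This is a hypothetical illustration of the rule described above, not Apache source code; the function name `apply_condition` is our own.

```python
# Hypothetical sketch of the status-code condition rule: an item is kept
# when the request's status matches the (possibly negated) code list, and
# replaced with "-" otherwise. Not Apache's actual implementation.
def apply_condition(value, status, codes, negated=False):
    matched = status in codes
    if negated:
        matched = not matched
    return value if matched else "-"

# "%400,501{User-agent}i": log User-agent only on 400 and 501
print(apply_condition("Mozilla/5.0", 400, {400, 501}))            # Mozilla/5.0
print(apply_condition("Mozilla/5.0", 200, {400, 501}))            # -
# "%!200,304,302{Referer}i": log Referer unless the status is "normal"
print(apply_condition("http://x/", 200, {200, 304, 302}, negated=True))  # -
```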

The modifiers “<” and “>” can be used for requests that have been internally redirected to choose whether the original or final (respectively) request should be consulted. By default, the % directives %s, %U, %T, %D, and %r look at the original request while all others look at the final request. So for example, %>s can be used to record the final status of the request and %<u can be used to record the original authenticated user on a request that is internally redirected to an unauthenticated resource.

Note that in httpd 2.0 versions prior to 2.0.46, no escaping was performed on the strings from %...r, %...i and %...o. This was mainly to comply with the requirements of the Common Log Format. This implied that clients could insert control characters into the log, so you had to be quite careful when dealing with raw log files.

For security reasons, starting with 2.0.46, non-printable and other special characters are escaped mostly by using \xhh sequences, where hh stands for the hexadecimal representation of the raw byte. Exceptions from this rule are " and \, which are escaped by prepending a backslash, and all whitespace characters, which are written in their C-style notation (\n, \t, etc.).
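The escaping rule just described can be modeled as follows. This is a minimal sketch of the behavior as documented, not Apache's implementation; the helper `escape_log_field` is a name of our own invention.

```python
# Sketch of the 2.0.46+ escaping rule described above (hypothetical helper,
# not Apache code): C-style notation for whitespace, backslash-escaping for
# quote and backslash, and \xhh for the remaining non-printable bytes.
def escape_log_field(raw: bytes) -> str:
    c_style = {0x0a: "\\n", 0x09: "\\t", 0x0d: "\\r", 0x0b: "\\v", 0x0c: "\\f"}
    out = []
    for byte in raw:
        if byte in c_style:
            out.append(c_style[byte])
        elif byte in (0x22, 0x5c):        # '"' and '\'
            out.append("\\" + chr(byte))
        elif 0x20 <= byte < 0x7f:         # printable ASCII passes through
            out.append(chr(byte))
        else:
            out.append("\\x%02x" % byte)  # everything else as \xhh
    return "".join(out)

print(escape_log_field(b'GET /index.html?q="a\tb" HTTP/1.1'))
```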

Note that in httpd 2.0, unlike 1.3, the %b and %B format strings do not represent the number of bytes sent to the client, but simply the size in bytes of the HTTP response (which will differ, for instance, if the connection is aborted, or if SSL is used). The %O format provided by mod_logio will log the actual number of bytes sent over the network.

Some commonly used log format strings are:

Common Log Format (CLF)
"%h %l %u %t \"%r\" %>s %b"
Common Log Format with Virtual Host
"%v %h %l %u %t \"%r\" %>s %b"
NCSA extended/combined log format
"%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""
Referer log format
"%{Referer}i -> %U"
Agent (Browser) log format
"%{User-agent}i"
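A line written in the NCSA combined format above can be pulled apart with a short script. This is a minimal sketch; the regular expression and field names are our own, not part of Apache or any standard library.

```python
import re

# Parse one line in the NCSA extended/combined log format shown above.
# Hypothetical pattern for illustration; real log parsers must also handle
# escaped quotes inside fields (see the escaping notes in this document).
COMBINED = re.compile(
    r'(?P<host>\S+) (?P<logname>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
        '"http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98)"')

m = COMBINED.match(line)
print(m.group('host'), m.group('status'), m.group('bytes'))
```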

Note that the canonical ServerName and Listen of the server serving the request are used for %v and %p respectively. This happens regardless of the UseCanonicalName setting because otherwise log analysis programs would have to duplicate the entire vhost matching algorithm in order to decide what host really served the request.

Security Considerations

See the security tips document for details on why your security could be compromised if the directory where logfiles are stored is writable by anyone other than the user that starts the server.

BufferedLogs Directive

Description: Buffer log entries in memory before writing to disk
Syntax: BufferedLogs On|Off
Default: BufferedLogs Off
Context: server config
Status: Base
Module: mod_log_config
Compatibility: Available in versions 2.0.41 and later.

The BufferedLogs directive causes mod_log_config to store several log entries in memory and write them together to disk, rather than writing them after each request. On some systems, this may result in more efficient disk access and hence higher performance. It may be set only once for the entire server; it cannot be configured per virtual-host.

This directive is experimental and should be used with caution.

CookieLog Directive

Description: Sets filename for the logging of cookies
Syntax: CookieLog filename
Context: server config, virtual host
Status: Base
Module: mod_log_config
Compatibility: This directive is deprecated.

The CookieLog directive sets the filename for logging of cookies. The filename is relative to the ServerRoot. This directive is included only for compatibility with mod_cookies, and is deprecated.

CustomLog Directive

Description: Sets filename and format of log file
Syntax: CustomLog file|pipe format|nickname [env=[!]environment-variable]
Context: server config, virtual host
Status: Base
Module: mod_log_config

The CustomLog directive is used to log requests to the server. A log format is specified, and the logging can optionally be made conditional on request characteristics using environment variables.

The first argument, which specifies the location to which the logs will be written, can take one of the following two types of values:

file
A filename, relative to the ServerRoot.
pipe
The pipe character “|“, followed by the path to a program to receive the log information on its standard input.

Security:

If a program is used, then it will be run as the user who started httpd. This will be root if the server was started by root; be sure that the program is secure.

Note

When entering a file path on non-Unix platforms, care should be taken to make sure that only forward slashes are used, even though the platform may allow the use of backslashes. In general, it is a good idea to always use forward slashes throughout the configuration files.

The second argument specifies what will be written to the log file. It can specify either a nickname defined by a previous LogFormat directive, or it can be an explicit format string as described in the log formats section.

For example, the following two sets of directives have exactly the same effect:

# CustomLog with format nickname

LogFormat "%h %l %u %t \"%r\" %>s %b" common

CustomLog logs/access_log common

# CustomLog with explicit format string

CustomLog logs/access_log "%h %l %u %t \"%r\" %>s %b"

The third argument is optional and controls whether or not to log a particular request based on the presence or absence of a particular variable in the server environment. If the specified environment variable is set for the request (or is not set, in the case of an 'env=!name' clause), then the request will be logged.

Environment variables can be set on a per-request basis using the mod_setenvif and/or mod_rewrite modules. For example, if you want to record requests for all GIF images on your server in a separate logfile but not in your main log, you can use:

SetEnvIf Request_URI \.gif$ gif-image

CustomLog gif-requests.log common env=gif-image

CustomLog nongif-requests.log common env=!gif-image

Or, to reproduce the behavior of the old RefererIgnore directive, you might use the following:

SetEnvIf Referer example\.com localreferer

CustomLog referer.log referer env=!localreferer
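The env= decision behind these examples can be expressed as a tiny function. A hedged sketch of the documented rule, not Apache code; `should_log` is a hypothetical name.

```python
# Sketch of the CustomLog env= rule described above: a request is logged
# when the named environment variable is set, or, with a leading '!',
# when it is not set. No condition means the request is always logged.
def should_log(env, condition=None):
    if condition is None:
        return True
    negated = condition.startswith('!')
    name = condition.lstrip('!')
    present = name in env
    return not present if negated else present

# CustomLog gif-requests.log common env=gif-image
print(should_log({'gif-image': '1'}, 'gif-image'))   # True
# CustomLog nongif-requests.log common env=!gif-image
print(should_log({}, '!gif-image'))                  # True
```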

LogFormat Directive

Description: Describes a format for use in a log file
Syntax: LogFormat format|nickname [nickname]
Default: LogFormat "%h %l %u %t \"%r\" %>s %b"
Context: server config, virtual host
Status: Base
Module: mod_log_config

This directive specifies the format of the access log file.

The LogFormat directive can take one of two forms. In the first form, where only one argument is specified, this directive sets the log format which will be used by logs specified in subsequent TransferLog directives. The single argument can specify an explicit format as discussed in the custom log formats section above. Alternatively, it can use a nickname to refer to a log format defined in a previous LogFormat directive as described below.

The second form of the LogFormat directive associates an explicit format with a nickname. This nickname can then be used in subsequent LogFormat or CustomLog directives rather than repeating the entire format string. A LogFormat directive that defines a nickname does nothing else; that is, it only defines the nickname, it doesn't actually apply the format and make it the default. Therefore, it will not affect subsequent TransferLog directives. In addition, LogFormat cannot use one nickname to define another nickname. Note that the nickname should not contain percent signs (%).

Example

LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_common

TransferLog Directive

Description: Specify location of a log file
Syntax: TransferLog file|pipe
Context: server config, virtual host
Status: Base
Module: mod_log_config

This directive has exactly the same arguments and effect as the CustomLog directive, with the exception that it does not allow the log format to be specified explicitly or for conditional logging of requests. Instead, the log format is determined by the most recently specified LogFormat directive which does not define a nickname. Common Log Format is used if no other format has been specified.

Example

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""

TransferLog logs/access_log

Web stats: Urchin

Urchin is software that analyzes the log files of IIS and Apache and displays reports.

  • In-house Flexibility: Configure Urchin to fit your specific requirements and process/reprocess log files as frequently as you wish.
  • Great for intranets: Analyze firewall-protected content, such as corporate intranets, without any outside internet connection.
  • Pagetags or IP+User Agent: Choose which methodology works best for you. You can even have the pagetags make a call to your Google Analytics account and run both products together allowing you to audit the pre and post processed data.
  • Advanced Visitor Segmentation: Cross segment visitor behavior by language, geographic location, and other factors.
  • Geo-targeting: Find out where your visitors come from and which markets have the greatest profit potential.
  • Funnel Visualization: Eliminate conversion bottlenecks and reduce the numbers of prospects who drift away unconverted.
  • Complete Conversion Metrics: See ROI, revenue per click, average visitor value and more.
  • Keyword Analysis: Compare conversion metrics across search engines and keywords.
  • A/B Test Reporting: Test banner ads, emails, and keywords and fine-tune your creative content for better results.
  • Ecommerce Analytics: Trace transactions to campaigns and keywords, get loyalty and latency metrics, and see product merchandising reports.
  • Search engine robots, server errors and file type reports: Get the stuff that only log data can report on.
  • Visitor History Drilldown: dig into visitor behavior with the ability to view session/path, platform, geo-location, browser/platform, etc. data on an individual-visitor basis (note: this data is anonymous).
Feature                                                Urchin 6   Google Analytics
Install and manage on your own servers                 Yes        No
Can be used on firewall-protected corporate intranets  Yes        No
Reprocess historical data (from logfiles)              Yes        No
Can process/re-process your log files locally          Yes        No
Can collect information through tags                   No         Yes
Reports on robot/spider activity                       Yes        No
Reports on server errors/status codes                  Yes        No
Tightly integrated with AdWords                        No         Yes
Can report on paid search campaigns                    Yes        Yes
Ecommerce/Conversion reporting                         Yes        Yes
Geotargeting                                           Yes        Yes
Free                                                   No         Yes
Visitor session/navigation path analyses               Yes        No
Raw data accessible for custom report-building         Yes        No
Exclusively supported by authorized consultants        Yes        No

http://www.urchin.com
