first, however, you need to build the locate database before you can use it.
run the script that builds it (typically updatedb, usually run as root):
ok, that was too many files; do:
locate .html |more
ok, locate all the instances of .html files:
how many html files are there?
locate .html |wc -l
locate your netscape binary if netscape is on your machine...
is netscape in your path?
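one quick way to answer that (command -v is a standard shell builtin; the netscape name here just mirrors the example above):

```shell
# check whether netscape is already on your PATH
# (command -v prints the full path when the command is found)
command -v netscape || echo "netscape is not in your PATH"

# you can also eyeball PATH itself, one directory per line:
echo "$PATH" | tr ':' '\n'
```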
let's make a symlink:
ln -s /usr/local/netscape/netscape /usr/local/bin/netscape
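a throwaway demonstration of how ln -s behaves, using scratch paths in /tmp rather than the real netscape install:

```shell
# throwaway demonstration of ln -s in a scratch directory
# (the real command above links the netscape binary into /usr/local/bin)
mkdir -p /tmp/lndemo
echo "hello" > /tmp/lndemo/target
ln -sf /tmp/lndemo/target /tmp/lndemo/link   # -f replaces any old link
ls -l /tmp/lndemo/link    # shows: link -> /tmp/lndemo/target
cat /tmp/lndemo/link      # reads through the link to the target's contents
```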
get the file:
how big is the file?
now, uncompress the file.
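assuming the file arrived gzip-compressed as access.log.9.gz (the name is inferred from the gzip command further down), gunzip is the tool; a tiny stand-in file is used here so the step can be tried anywhere:

```shell
# uncompress with gunzip; a throwaway stand-in file makes this safe to try
# (on the real machine you'd just run: gunzip access.log.9.gz)
printf 'one\ntwo\n' > /tmp/access.log.9
gzip -f /tmp/access.log.9      # makes /tmp/access.log.9.gz
gunzip /tmp/access.log.9.gz    # restores /tmp/access.log.9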
do an ls -l
that file is pretty big (150MB)
let's recompress it:
gzip -9 access.log.9
hm... that's going to take a while; let's background it.
at the new prompt:
process is now running in the background
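there are two common ways to get a job into the background; the sketch below uses a throwaway file in /tmp rather than the real access log:

```shell
# two ways to background a long-running job
# 1) launch it with '&' from the start (shown here on a throwaway file):
head -c 100000 /dev/zero > /tmp/bgdemo.log
gzip -9 -f /tmp/bgdemo.log &
jobs      # lists the background job
wait      # block until it finishes (optional)

# 2) if it's already running in the foreground:
#    press Ctrl-Z to suspend it, then type 'bg' to resume it in the background
```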
ps... hmm... a few processes.
let's find just that process:
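one way to pull a single process out of the full ps listing; a background sleep stands in here for the running gzip so the commands can be tried anywhere:

```shell
# pick one process out of the full ps listing
# (a background 'sleep' stands in here for the running gzip)
sleep 30 &
ps aux | grep '[s]leep'   # the [s] keeps grep from matching its own line
# pgrep does the same job directly, e.g.:  pgrep -l gzip
kill $!                   # tidy up the stand-in job
```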
ok, we could go do something else, but it should be done by now...
let's just manipulate the log file while it's still compressed.
take a look at what the contents of the file look like:
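zcat and zless read gzip files without uncompressing them on disk; the stand-in below uses made-up squid-style lines so the commands can be tried anywhere:

```shell
# read a compressed file without uncompressing it on disk
# (on the real machine: zcat access.log.9.gz | head, or zless access.log.9.gz)
# made-up stand-in lines in roughly squid's access.log shape:
printf '%s\n' '874.. 120 10.0.0.1 TCP_HIT/200 ...' \
              '874.. 310 10.0.0.2 TCP_MISS/200 ...' | gzip > /tmp/sqdemo.gz
zcat /tmp/sqdemo.gz | head
```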
hm... looks like a squid log file
what useful things can we learn from this file?
in a squid log file each request gets its own line, so if we count the lines in the file we can figure out how many requests there were that day.
zcat access.log.9.gz|wc -l
ok that's a big number
now let's get the raw hits...
zcat access.log.9.gz|grep HIT|wc -l
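the two counts combine naturally into a hit ratio; this sketch assumes squid's TCP_HIT / TCP_MISS tags and uses stand-in data so the pipeline can be tried anywhere:

```shell
# hits versus total requests, still without uncompressing the log on disk
# stand-in data (squid tags cache hits TCP_HIT and misses TCP_MISS):
printf '%s\n' '... TCP_HIT/200 ...' '... TCP_MISS/200 ...' '... TCP_HIT/200 ...' \
  | gzip > /tmp/hitdemo.gz

total=$(zcat /tmp/hitdemo.gz | wc -l | tr -d ' ')
hits=$(zcat /tmp/hitdemo.gz | grep -c HIT)
echo "$hits hits out of $total requests"   # prints: 2 hits out of 3 requests
```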