grep prints lines that contain a match for a pattern. The general synopsis of the grep command line is:

grep options pattern input_file_names

There can be zero or more options. pattern will only be seen as such (and not as an input_file_name) if it wasn't already specified within options (by using the '-e pattern' or '-f file' options). There can be zero or more input_file_names.

Matching Control

-e pattern
--regexp=pattern
Use pattern as the pattern. This can be used to specify multiple search patterns, or to protect a pattern beginning with a '-'. (-e is specified by POSIX.)

-f file
--file=file
Obtain patterns from file, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.)

-i
-y
--ignore-case
Ignore case distinctions, so that characters that differ only in case match each other. Although this is straightforward when letters differ in case only via lowercase-uppercase pairs, the behavior is unspecified in other situations. For example, uppercase "S" has an unusual lowercase counterpart "ſ" (Unicode character U+017F, LATIN SMALL LETTER LONG S) in many locales, and it is unspecified whether this unusual character matches "S" or "s" even though uppercasing it yields "S". Another example: the lowercase German letter "ß" (U+00DF, LATIN SMALL LETTER SHARP S) is normally capitalized as the two-character string "SS" but it does not match "SS", and it might not match the uppercase letter "ẞ" (U+1E9E, LATIN CAPITAL LETTER SHARP S) even though lowercasing the latter yields the former. -y is an obsolete synonym that is provided for compatibility. (-i is specified by POSIX.)

-v
--invert-match
Invert the sense of matching, to select non-matching lines. (-v is specified by POSIX.)

-w
--word-regexp
Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word-constituent character. Similarly, it must be either at the end of the line or followed by a non-word-constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified.

-x
--line-regexp
Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with '^' and '$'. (-x is specified by POSIX.)
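A quick sketch of the difference between -w and -x (the echoed sample text is only an illustration):

echo "foobar foo" | grep -w foo   # matches: "foo" stands alone as a word
echo "foobar foo" | grep -x foo   # no match: the whole line is not exactly "foo"
echo "foo" | grep -x foo          # matches: the whole line is exactly "foo"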
Examples:

To find authentication logs for "root" on a Debian system:
# grep "root" /var/log/auth.log

With context lines we can see, for example, that when someone fails to log in as an admin, they also fail the reverse mapping, which means they might not have a valid domain name:
# grep -B 3 -A 2 'Invalid user' /var/log/auth.log

To find failed authentication attempts for the current system date:
# grep "$(date +%b) $(date +%e)" /var/log/auth.log | grep 'fail\|preauth'

To find authentication logs for the current system hour:
# grep "$(date +%b) $(date +%e) $(date +%H:)" /var/log/auth.log

To find mail logs for the current system date:
# grep "$(date +%b) $(date +%e)" /var/log/mail.info

To find mail logs from one hour ago:
# grep "$(date --date="1 hours ago" +%b) $(date --date="1 hours ago" +%e)" /var/log/mail.info
A list of date command format specifiers is available at http://www.cyberciti.biz/faq/unix-linux-bash-get-time/.
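As a quick sketch of the specifiers used in the examples above (%b is the abbreviated month name, %e the space-padded day of month, %H the hour):

date +%b            # e.g. Mar
date +%e            # e.g. 22
date "+%b %e %H:"   # the prefix used above to match syslog timestamps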
References:
- GNU grep

Tuesday, March 22, 2016
Core util: cat, head, tail, sort, uniq and cut
cat
cat copies each file ('-' means standard input), or standard input if none are given, to standard output.
Synopsis:
cat [option] [file]…
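For example, when analyzing logs it can be handy to concatenate a rotated log with the current one (the file names here are only an illustration):

# cat /var/log/auth.log.1 /var/log/auth.log > /tmp/auth-all.log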
head
head prints the first part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of '-'.
Synopsis:
head [option]… [file]…

# head /var/log/auth.log
tail
tail prints the last part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of '-'.
Synopsis:
tail [option]… [file]…

# tail /var/log/auth.log
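tail's -f option is also worth knowing for log analysis: it keeps the file open and prints new lines as they are appended:

# tail -f /var/log/auth.log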
sort
sort sorts, merges, or compares all the lines from the given files, or standard input if none are given or for a file of '-'. By default, sort writes the results to standard output.
Synopsis:
sort [option]… [file]…

Options:

'-n'
'--numeric-sort'
'--sort=numeric'
Sort numerically. The number begins each line and consists of optional blanks, an optional '-' sign, and zero or more digits possibly separated by thousands separators, optionally followed by a decimal-point character and zero or more digits. An empty number is treated as '0'. The LC_NUMERIC locale specifies the decimal-point character and thousands separator. By default a blank is a space or a tab, but the LC_CTYPE locale can change this.
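A small sketch of the difference between lexical and numeric sorting:

printf '10\n9\n2\n' | sort      # lexical order: 10, 2, 9
printf '10\n9\n2\n' | sort -n   # numeric order: 2, 9, 10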
uniq
uniq writes the unique lines in the given input, or standard input if nothing is given or for an input name of '-'.
Synopsis:
uniq [option]… [input [output]]

Options:
'-c'
'--count'
Print the number of times each line occurred along with the line.
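Note that uniq only compares adjacent lines, so input is normally sorted first. A minimal sketch (the file name is just an illustration):

sort names.txt | uniq -c   # count how many times each distinct line occurs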
cut
cut writes to standard output selected parts of each line of each input file, or standard input if no files are given or for a file name of '-'.
Synopsis:
cut option… [file]…

Options:

'-d input_delim_byte'
'--delimiter=input_delim_byte'
With -f, use the first byte of input_delim_byte as the input field separator (default is TAB).

'-f field-list'
'--fields=field-list'
Select for printing only the fields listed in field-list. Fields are separated by a TAB character by default. Also print any line that contains no delimiter character, unless the --only-delimited (-s) option is specified.
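For example, to print the user name and home directory (fields 1 and 6) of each account in the colon-delimited password file:

cut -d: -f1,6 /etc/passwd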
Note that awk supports more sophisticated field processing, and by default will use (and discard) runs of blank characters to separate fields, and ignore leading and trailing blanks.
awk '{print $2}'       # print the second field
awk '{print $(NF-1)}'  # print the penultimate field
awk '{print $2,$1}'    # reorder the first two fields
In the unlikely event that awk is unavailable, one can use the join command to process blank characters as awk does above.
join -a1 -o 1.2 - /dev/null      # print the second field
join -a1 -o 1.2,1.1 - /dev/null  # reorder the first two fields
Example: a quick way to see which IP addresses are most active is to sort by them:
# cat access.log | cut -d ' ' -f 1 | sort
UPDATE: even easier: the uniq command has a -c argument that does most of this work automatically. It counts the occurrences of each unique line. Then a quick sort -n and a tail shows the big ones. Also, I tend to use "cut" as above, but one of the Dreamhost guys reminded me that awk may be a little more straightforward:
# cat /path/to/access.log | awk '{print $1}' | sort | uniq -c | sort -n | tail
References:
- https://www.gnu.org/software/coreutils/manual/html_node/index.html
- https://encodable.com/tech/blog/2008/12/17/Count_IP_Addresses_in_Access_Log_File_BASH_OneLiner
Monday, March 21, 2016
Virtualmin creating Sub-Server for Sub-Domain
These are the steps to create a Sub-Server for a Sub-Domain:
- Virtualmin -> Create Virtual Server
- Fill in the Domain name with the sub-domain, for example subdom.domain.com (domain.com is the root domain; change it to your own)
- In Enabled features, check these items:
- Setup DNS zone?
- Setup website for domain?
- Setup SSL website too?
- Choose one of Setup Webalizer for web logs? or Enable AWstats reporting? (optional)
- Choose other options depending on your requirements
Friday, March 18, 2016
Linux awk
Awk can do most tasks that amount to text processing.
An awk program follows the form:
pattern { action }
awk is line oriented. That is, the pattern specifies a test that is performed with each line read as input. If the condition is true, then the action is taken. The default pattern is something that matches every line. This is the blank or null pattern.
The general structure of an awk program is shown below:
BEGIN { print "START" }
{ print }
END { print "STOP" }
Example:
BEGIN { print "File\tOwner" }
{ print $8, "\t", $3 }
END { print " - DONE -" }
Example awk_example1.awk
#!/bin/awk -f
BEGIN { print "File\tOwner" }
{ print $8, "\t", $3 }
END { print " - DONE -" }
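This script is meant to be fed the output of ls -l, where (on typical systems, an assumption) field 3 is the owner and field 8 the file name:

chmod +x awk_example1.awk
ls -l | ./awk_example1.awk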
In its simplest usage awk is meant for processing column-oriented text data, such as tables, presented to it on standard input. The variables $1, $2, and so forth are the contents of the first, second, etc. column of the current input line. For example, to print the second column of a file, you might use the following simple awk script:
awk < file '{ print $2 }'
This means "on every line, print the second field".
By default awk splits input lines into fields based on whitespace, that is, spaces and tabs. You can change this by using the -F option to awk and supplying another character. For instance, to print the home directories of all users on the system, you might do
awk < /etc/passwd -F: '{ print $6 }'
since the password file has fields delimited by colons and the home directory is the 6th field.
Awk is a weakly typed language; variables can be either strings or numbers, depending on how they're referenced. All numbers are floating-point. So to implement a Fahrenheit-to-Celsius calculator, you might write
awk '{ print ($1-32)*(5/9) }'
which will convert Fahrenheit temperatures provided on standard input to Celsius until it gets an end-of-file.
echo 5 4 | awk '{ print $1 + $2 }'
prints 9, while
echo 5 4 | awk '{ print $1 $2 }'
prints 54. Note that
echo 5 4 | awk '{ print $1, $2 }'
prints "5 4".
awk has some built-in variables that are automatically set; $1 and so on are examples of these. The other builtin variables that are useful for beginners are generally NF, which holds the number of fields in the current input line ($NF gives the last field), and $0, which holds the entire current input line.
You can make your own variables, with whatever names you like (except for reserved words in the awk language) just by using them. You do not have to declare variables. Variables that haven't been explicitly set to anything have the value "" as strings and 0 as numbers.
For example, the following code prints the average of all the numbers on each line:
awk '{ tot=0; for (i=1; i<=NF; i++) tot += $i; print tot/NF; }'
while this one prints the average of the first column across all lines:
awk '{ tot += $1; n += 1; } END { print tot/n; }'
Note the use of two different block statements. The second one has END in front of it; this means to run the block once after all input has been processed.
You can also supply regular expressions to match the whole line against:
awk ' /^test/ { print $2 }'
The block conditions BEGIN and END are special and are run before processing any input, and after processing all input, respectively.
awk supports loop and conditional statements like in C, that is, for, while, do/while, if, and if/else.
awk '{ for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n"; }'
Note the use of NF to iterate over all the fields and the use of printf to place newlines explicitly.
Finding everything within the last 2 hours:
awk -vDate=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date {print Date, $0}' access_log
Note: the date is stored in field 4.
To find something between 2 and 4 hours ago (note the file name belongs outside the quoted awk program):
awk -vDate=`date -d'now-4 hours' +[%d/%b/%Y:%H:%M:%S` -vDate2=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date && $4 < Date2 {print Date, Date2, $4}' access_log
The following will show you the IPs of every user who requests the index page sorted by the number of hits:
awk -F'[ "]+' '$7 == "/" { ipcount[$1]++ } END { for (i in ipcount) { printf "%15s - %d\n", i, ipcount[i] } }' logfile.log
$7 is the requested URL. You can add whatever conditions you want at the beginning. Replace $7 == "/" with whatever condition you want.
If you replace the $1 in (ipcount[$1]++), then you can group the results by other criteria. Using $7 would show what pages were accessed and how often. Of course then you would want to change the condition at the beginning. The following would show what pages were accessed by a user from a specific IP:
awk -F'[ "]+' '$1 == "1.2.3.4" { pagecount[$7]++ } END { for (i in pagecount) { printf "%15s - %d\n", i, pagecount[i] } }' logfile.log
You can also pipe the output through sort to get the results in order, either as part of the shell command, or in the awk script itself (inside awk, the pipe target must be a quoted command name):
awk -F'[ "]+' '$7 == "/" { ipcount[$1]++ } END { for (i in ipcount) { printf "%15s - %d\n", i, ipcount[i] | "sort" } }' logfile.log
Example: how to remove duplicate lines in a text file without sorting it:
awk '!x[$0]++' [text_file_name]
(x[$0]++ evaluates to the previous number of times the line was seen, so the expression is true, and the line is printed, only on its first occurrence.)
There are only a few commands in AWK. The list and syntax follows:
- if ( conditional ) statement [ else statement ]
- while ( conditional ) statement
- for ( expression ; conditional ; expression ) statement
- for ( variable in array ) statement
- break
- continue
- { [ statement ] ... }
- variable = expression
- print [ expression-list ] [ > expression ]
- printf format [ , expression-list ] [ > expression ]
- next
- exit
Example:
#!/bin/awk -f
BEGIN {
    # Print the squares from 1 to 10 the first way
    i = 1;
    while (i <= 10) {
        printf "The square of %d is %d\n", i, i*i;
        i = i + 1;
    }
    # do it again, using more concise code
    for (i = 1; i <= 10; i++) {
        printf "The square of %d is %d\n", i, i*i;
    }
    # now end
    exit;
}
Abbreviations:
- NF: number of fields
- NR: number of records (the current line number)
- FS: field separator
- RS: record separator, e.g. RS="\n"
- ORS: output record separator, e.g. ORS="\r\n"
- FILENAME: name of the current input file
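A minimal sketch of setting these variables, using the colon-delimited /etc/passwd as input:

awk 'BEGIN { FS=":"; ORS="\r\n" } { print $1 }' /etc/passwd   # print user names with DOS line endings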
References:
- http://www.hcs.harvard.edu/~dholland/computers/awk.html
- http://stackoverflow.com/questions/7706095/find-entries-in-log-file-within-timespan-eg-the-last-hour
- http://www.grymoire.com/Unix/Awk.html
- http://serverfault.com/questions/11028/do-you-have-any-useful-awk-and-grep-scripts-for-parsing-apache-logs
- http://stackoverflow.com/questions/11532157/unix-removing-duplicate-lines-without-sorting
How to remove fglrx and replace it with the AMD/ATI driver in Debian Jessie
"The fglrx driver is incompatible with the GNOME desktop released as part
of Debian 8 "Jessie", as it does not support the EGL interface (release notes). It is recommended to use the free radeon driver instead."
After installing fglrx, I could not enter GNOME. Here is how to remove the fglrx driver and reinstall the AMD/ATI driver in Debian Jessie.
- Remove all fglrx drivers:
# apt-get remove fglrx-driver fglrx-atieventsd libfglrx
- Add this repository to /etc/apt/sources.list:
deb http://httpredir.debian.org/debian/ jessie main contrib non-free
- Install or reinstall the firmware:
# apt-get install firmware-linux-free firmware-linux-nonfree
# apt-get install --reinstall firmware-linux-free firmware-linux-nonfree
- Install or reinstall the radeon driver (see https://wiki.debian.org/AtiHowTo for your system):
# apt-get install xserver-xorg-video-radeon libdrm-radeon1 radeontool
# apt-get install --reinstall xserver-xorg-video-radeon libdrm-radeon1 radeontool
- Replace /etc/X11/xorg.conf with your working configuration. In my case, /etc/X11/xorg.conf.original-0.
- You can now start GNOME:
# startx
My hardware:
- A8-4500M
- AMD Radeon® Mobility™ HD7640G + HD 7470M Dual Graphics with 1GB DDR3 VRAM
References:
- https://wiki.debian.org/AtiHowTo
- https://wiki.debian.org/ATIProprietary