* What do the following commands do, and how would you use them?
 * ```tee```
tee is normally used to split the output of a program so that it can be both displayed and saved in a file.
The command can be used to capture intermediate output before the data is altered by another command or program.
The tee command reads standard input, then writes its content to standard output, simultaneously copying it into the specified file(s).
The syntax differs depending on the command's implementation:

tee [ -a ] [ -i ] [ File ... ]

File ... A list of files, each of which receives the output.

-a Appends the output to each file, rather than overwriting it.
-i Ignores interrupts.
The command returns the following exit values (exit status):

0 The standard input was successfully copied to all output files.
>0 An error occurred.
Using process substitution lets more than one process read the standard output of the originating process; see the examples in the GNU Coreutils manual under "tee invocation".
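As a minimal sketch of typical usage (the file name out.log is illustrative):

```shell
# Show output on screen while also saving it to a file
printf 'first\nsecond\n' | tee out.log

# -a appends to the file instead of overwriting it
printf 'third\n' | tee -a out.log

# out.log now holds all three lines
cat out.log
```

In bash, process substitution such as `tee >(wc -l)` lets a second process consume the same stream.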

Note: If a write to any successfully opened File operand fails, writes to the other successfully opened File operands and to standard output will continue, but the exit value will be >0.
* ```awk```
awk - Finds and replaces text; sorts, validates, and indexes database-style records.

awk 'Program' input-file1 input-file2 ...
awk -f PROGRAM-FILE input-file1 input-file2 ...
The awk command searches files for text matching a pattern. When a line matches, awk performs a specific action on that line. The Program statement tells awk what operation to do; it consists of a series of "rules", where each rule specifies one pattern to search for and one action to perform when that pattern is found. A regular expression enclosed in slashes (/) is an awk pattern that matches every input record whose text belongs to that set.

Tag Description
--field-separator FS Use FS for the input field separator (the value of the 'FS' predefined variable).
--file PROGRAM-FILE Read the awk program source from the file PROGRAM-FILE, instead of from the first command line argument.
-mf NNN
-mr NNN The 'f' flag sets the maximum number of fields, and the 'r' flag sets the maximum record size. These options are ignored by 'gawk', since 'gawk' has no predefined limits; they are only for compatibility with the Bell Labs research version of Unix awk.
--assign VAR=VAL Assign the variable VAR the value VAL before program execution begins.
-W traditional
-W compat
--compat Use compatibility mode, in which 'gawk' extensions are turned off.
-W lint
--lint Give warnings about dubious or non-portable awk constructs.
-W lint-old
--lint-old Warn about constructs that are not available in the original Version 7 Unix version of awk.
-W posix
--posix Use POSIX compatibility mode, in which 'gawk' extensions are turned off and additional restrictions apply.
-W re-interval
--re-interval Allow interval expressions in regexps.
--source PROGRAM-TEXT Use PROGRAM-TEXT as awk program source code. This option allows mixing command line source code with source code from files, and is particularly useful for mixing command line programs with library functions.
-- Signal the end of options. This is useful to allow further arguments to the awk program itself to start with a '-'. This is mainly for consistency with POSIX argument parsing conventions.
'Program' A series of patterns and actions
Input-File If no Input-File is specified, awk applies the Program to standard input (the piped output of some other command, or the terminal). Typed input continues until end-of-file (typing 'Control-d').
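Before the option-driven examples below, a minimal sketch of a single rule, i.e. a /pattern/ paired with an { action }; the input text here is made up for illustration:

```shell
# Only records matching /alpha/ trigger the action, which prints field 2
printf 'alpha 1\nbeta 2\nalpha 3\n' | awk '/alpha/ { print $2 }'
# prints:
# 1
# 3
```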
To return the second item ($2) from each line of the output of an ls -l listing.

$ ls -l | awk '{print $2}'
To print the record number (NR), then a dash and space ("- "), and then the first item ($1) from each line in sample.txt.

First create a sample.txt file

Sample Line 1
Sample Line 2
Sample Line 3
$ awk '{print NR "- " $1 }' sample.txt
1- Sample
2- Sample
3- Sample
To print the first item ($1) and then the second-to-last item ($(NF-1)) from each line in sample.txt.

$ awk '{print $1, $(NF-1) }' sample.txt
Sample Line
Sample Line
Sample Line
To print only non-empty lines from a file.

$ awk 'NF > 0' sample.txt
To print the length of the longest input line.

$ awk '{ if (length($0) > max) max = length($0) } END { print max }' sample.txt
To print seven random numbers from zero to 100, inclusive.

$ awk 'BEGIN { for (i = 1; i <= 7; i++) print int(101 * rand()) }'
To count the lines in a file

$ awk 'END { print NR }' sample.txt
3
* ```tr```

tr is a UNIX utility for translating, deleting, or squeezing repeated characters. It reads from STDIN and writes to STDOUT.

tr stands for translate.

The syntax of tr command is:

$ tr [OPTION] SET1 [SET2]
If both SET1 and SET2 are specified and the '-d' option is not,
the tr command will replace each character in SET1 with the character in the same position in SET2.

1. Convert lower case to upper case
The following tr command is used to convert the lower case to upper case

$ tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ
The following command will also convert lower case to upper case

$ tr '[:lower:]' '[:upper:]'
You can also use ranges in tr. The following command uses ranges to convert lower to upper case.

$ tr a-z A-Z
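The deleting and squeezing modes mentioned at the top of this section use the -d and -s options; a quick sketch:

```shell
# Translate lowercase to uppercase
echo 'hello world' | tr 'a-z' 'A-Z'   # HELLO WORLD

# -d deletes every character that appears in SET1
echo 'hello' | tr -d 'l'              # heo

# -s squeezes runs of repeated characters into one
echo 'aabbcc' | tr -s 'ab'            # abcc
```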

* ```cut```
Linux command cut is used for text processing. You can use this command to extract portions of text from a file by selecting columns.

This tutorial provides a few practical examples of the cut command that you can use in your day-to-day command line activities.

For most of the examples, we’ll be using the following test file.

$ cat test.txt
cat command for file oriented operations.
cp command for copy files or directories.
ls command to list out files and directories with its attributes.
1. Select Column of Characters
To extract only a desired column of characters from a file, use the -c option. The following example displays the 2nd character from each line of the file test.txt.

$ cut -c2 test.txt
As seen above, the characters a, p, s are the second character from each line of the test.txt file.

2. Select Column of Characters using Range
A range of characters can also be extracted from a file by specifying start and end positions delimited with '-'. The following example extracts the first 3 characters of each line from the file test.txt.

$ cut -c1-3 test.txt
3. Select Column of Characters using either Start or End Position
Either start position or end position can be passed to cut command with -c option.

The following specifies only the start position before the '-'. This example extracts from the 3rd character to the end of each line of the test.txt file.

$ cut -c3- test.txt
t command for file oriented operations.
command for copy files or directories.
command to list out files and directories with its attributes.
The following specifies only the end position after the ‘-‘. This example extracts 8 characters from the beginning of each line from test.txt file.

$ cut -c-8 test.txt
cat comm
cp comma
ls comma
The entire line would get printed when you don’t specify a number before or after the ‘-‘ as shown below.

$ cut -c- test.txt
cat command for file oriented operations.
cp command for copy files or directories.
ls command to list out files and directories with its attributes.
4. Select a Specific Field from a File
Instead of selecting a fixed number of characters, if you'd like to extract a whole field, you can combine the options -f and -d. The option -f specifies which field to extract, and the option -d specifies the field delimiter used in the input file.

The following example displays only the first field of each line from the /etc/passwd file, using the field delimiter : (colon). In this case, the 1st field is the username.

$ cut -d':' -f1 /etc/passwd
5. Select Multiple Fields from a File
You can also extract more than one field from a file or stdout. The example below displays the username and home directory of users whose login shell is "/bin/bash".

$ grep "/bin/bash" /etc/passwd | cut -d':' -f1,6
To display a range of fields, specify the start and end fields as shown below. In this example, we select fields 1 through 4, 6, and 7.

$ grep "/bin/bash" /etc/passwd | cut -d':' -f1-4,6,7
6. Select Fields Only When a Line Contains the Delimiter
In our /etc/passwd example, if you pass a delimiter other than : (colon), cut will just display the whole line.

In the following example, we’ve specified the delimiter as | (pipe), and cut simply displays the whole line, since it doesn’t find the | (pipe) delimiter on any line.

$ grep "/bin/bash" /etc/passwd | cut -d'|' -f1
But it is possible to filter and display only the lines that contain the specified delimiter, using the -s option.

The following example doesn’t display any output, as the cut command didn’t find any lines that have | (pipe) as the delimiter in the /etc/passwd file.

$ grep "/bin/bash" /etc/passwd | cut -d'|' -s -f1
7. Select All Fields Except the Specified Fields
To complement the selection field list, use the option --complement.

The following example displays all the fields from the /etc/passwd file except field 7.

$ grep "/bin/bash" /etc/passwd | cut -d':' --complement -s -f7
8. Change Output Delimiter for Display
By default, the output delimiter is the same as the input delimiter specified with the cut -d option.

To change the output delimiter, use the option --output-delimiter as shown below. In this example, the input delimiter is : (colon), but the output delimiter is # (hash).

$ grep "/bin/bash" /etc/passwd | cut -d':' -s -f1,6,7 --output-delimiter='#'
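The same option works on any delimited input, not just /etc/passwd; a self-contained sketch (GNU cut):

```shell
# Extract fields 1 and 3, joining them with '#' on output
echo 'a:b:c:d' | cut -d':' -f1,3 --output-delimiter='#'   # a#c
```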
9. Change Output Delimiter to Newline
In this example, each field of the cut command output is displayed on a separate line. We still use --output-delimiter, but the value is $'\n', which indicates that the output delimiter should be a newline.

$ grep bala /etc/passwd | cut -d':' -f1,6,7 --output-delimiter=$'\n'
10. Combine Cut with Other Unix Command Output
The power of cut command can be realized when you combine it with the stdout of some other Unix command.

Once you master the basic usage of cut command that we’ve explained above, you can wisely use cut command to solve lot of your text manipulation requirements.

The following example shows how to extract only useful information from the ps command output. We also show how the output of ps is filtered using grep and sed before the final output is given to the cut command. Here, we've used the cut options -d and -f, which were explained in the examples above.

$ ps axu | grep python | sed 's/\s\+/ /g' | cut -d' ' -f2,11-
2231 /usr/bin/python /usr/lib/unity-lens-video/unity-lens-video
2311 /usr/bin/python /usr/lib/unity-scope-video-remote/unity-scope-video-remote
2414 /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
2463 /usr/bin/python /usr/lib/system-service/system-service-d
3274 grep --color=auto python

* ```tac```
About tac
Concatenate and print files in reverse.

tac (which is "cat" backwards) concatenates each FILE to standard output just like the cat command, but in reverse: line-by-line, printing the last line first. This is useful (for instance) for examining a chronological log file in which the last line of the file contains the most recent information.

If no FILE is specified, or if the FILE is specified as "-", tac reverses the contents of standard input.

tac syntax
tac [OPTION] ... [FILE] ...
-b, --before attach the line separator before each line of output instead of after.
-r, --regex interpret the line separator as a regular expression (useful with the -s option, see below).
-s, --separator=STRING use STRING as the line separator instead of a newline.
--help display command help and exit.
--version output version information and exit.
tac examples
tac file1.txt
Prints the lines of file1.txt in reverse, from last line to first.
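A self-contained sketch (the file name demo.txt is illustrative):

```shell
# Create a small file, then print its lines in reverse order
printf 'one\ntwo\nthree\n' > demo.txt
tac demo.txt
# prints:
# three
# two
# one
```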

Related commands
cat — Output the contents of a file.
tail — Print the last lines of a text file.
* ```curl```
cURL is a command-line tool for getting or sending files using URL syntax.

Since cURL uses libcurl, it supports a range of common Internet protocols, currently including HTTP, HTTPS, FTP, FTPS, SCP, SFTP, TFTP, LDAP, LDAPS, DICT, TELNET, FILE, IMAP, POP3, SMTP and RTSP (the last four only in versions 7.20.0, released 9 February 2010, and later).

cURL supports HTTPS and performs SSL certificate verification by default when a secure protocol such as HTTPS is specified. When cURL connects to a remote server via HTTPS, it first obtains the remote server's certificate and checks it against its CA certificate store to ensure the remote server is the one it claims to be. Some cURL packages come bundled with a CA certificate store file. There are several options to specify the CA certificate, such as --cacert and --capath. The --cacert option can be used to specify the location of the CA certificate store file. On the Windows platform, if a CA certificate file is not specified, cURL will look for a CA certificate file named “curl-ca-bundle.crt” in the following order:

Directory where the cURL program is located.
Current working directory.
Windows system directory.
Windows directory.
Directories specified in the %PATH% environment variable.
cURL will return an error message if the remote server is using a self-signed certificate, or if the remote server certificate is not signed by a CA listed in the CA cert file. The -k or --insecure option can be used to skip certificate verification. Alternatively, if the remote server is trusted, the remote server's CA certificate can be added to the CA certificate store file.
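Because FILE is among the supported protocols, curl can be exercised without a network connection; the certificate options discussed above are shown only as commented placeholders, since they need a real HTTPS endpoint:

```shell
# Fetch a local file via the FILE protocol
echo 'hello from curl' > demo.txt
curl -s "file://$PWD/demo.txt"   # hello from curl

# Certificate handling over HTTPS (placeholder paths/URLs):
#   curl --cacert /path/to/ca-bundle.crt https://example.com/
#   curl -k https://self-signed.example.com/   # skip verification
rm demo.txt
```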

* ```wget```

curl vs Wget
The main differences as I (Daniel Stenberg) see them. Please consider my bias towards curl since after all, curl is my baby - but I contribute to Wget as well.

Please let me know if you have other thoughts or comments on this document.

File issues or pull-requests if you find problems or have improvements.

What both commands do
both are command line tools that can download contents from FTP, HTTP and HTTPS
both can send HTTP POST requests
both support HTTP cookies
both are designed to work without user interaction, like from within scripts
both are fully open source and free software
both projects were started in the 90s
both support metalink
How they differ
library. curl is powered by libcurl - a cross-platform library with a stable API that can be used by each and everyone. This difference is major since it creates a completely different attitude on how to do things internally. It is also slightly harder to make a library than a "mere" command line tool.

pipes. curl works more like the traditional unix cat command, it sends more stuff to stdout, and reads more from stdin in a "everything is a pipe" manner. Wget is more like cp, using the same analogue.

Single shot. curl is basically made to do single-shot transfers of data. It transfers just the URLs that the user specifies, and does not contain any recursive downloading logic nor any sort of HTML parser.

More protocols. curl supports FTP, FTPS, Gopher, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMB/CIFS, SMTP, RTMP and RTSP. Wget only supports HTTP, HTTPS and FTP.

More portable. curl builds and runs on many more platforms than wget. For example: OS/400, TPF and other more "exotic" platforms that aren't straight-forward unix clones.

More SSL libraries and SSL support. curl can be built with one out of eleven (11!) different SSL/TLS libraries, and it offers more control and wider support for protocol details.

HTTP auth. curl supports more HTTP authentication methods, especially over HTTP proxies: Basic, Digest, NTLM and Negotiate

SOCKS. curl supports several SOCKS protocol versions for proxy access

Bidirectional. curl offers upload and sending capabilities. Wget only offers plain HTTP POST support.

HTTP multipart/form-data sending, which allows users to do HTTP "upload" and in general emulate browsers and do HTTP automation to a wider extent

curl supports gzip and deflate Content-Encoding and does automatic decompression

curl offers and performs decompression of Transfer-Encoded HTTP, wget doesn't

curl supports HTTP/2 and it does dual-stack connects using Happy Eyeballs

Much more developer activity. While this can be debated, I consider three metrics here: mailing list activity, source code commit frequency and release frequency. Anyone following these two projects can see that the curl project has a lot higher pace in all these areas, and it has been so for 10+ years. Compare on openhub

Wget is command line only. There's no library.

Recursive! Wget's major strong side compared to curl is its ability to download recursively, or even just download everything that is referred to from a remote resource, be it an HTML page or an FTP directory listing.

Older. Wget traces back to 1995, while curl can be traced back no earlier than the end of 1996.

GPL. Wget is 100% GPL v3. curl is MIT licensed.

GNU. Wget is part of the GNU project and all copyrights are assigned to FSF. The curl project is entirely stand-alone and independent with no organization parenting at all with almost all copyrights owned by Daniel.

Wget requires no extra options to simply download a remote URL to a local file, while curl requires -o or -O.

Wget supports only GnuTLS or OpenSSL for SSL/TLS support

Wget supports only Basic auth as the only auth type over HTTP proxy

Wget has no SOCKS support

Its ability to recover from a prematurely broken transfer and continue downloading has no counterpart in curl.

Wget enables more features by default: cookies, redirect-following, time stamping from the remote resource etc. With curl most of those features need to be explicitly enabled.

Wget can be typed in using only the left hand on a qwerty keyboard!

Additional Stuff
Some have argued that I should compare uploading capabilities with wput, but that's a separate tool/project and I don't include that in this comparison.

Two other capable tools with similar feature set include aria2 and axel (dead project?) - try them out!

For a stricter feature by feature comparison (that also compares other similar tools), see the curl comparison table

Feedback and improvements by: Micah Cowan, Olemis Lang
* ```watch```

watch - execute a program periodically, showing output fullscreen
watch [-dhvt] [-n <seconds>] [--differences[=cumulative]] [--help] [--interval=<seconds>] [--no-title] [--version] <command>

watch runs command repeatedly, displaying its output (the first screenful). This allows you to watch the program output change over time. By default, the program is run every 2 seconds; use -n or --interval to specify a different interval.
The -d or --differences flag will highlight the differences between successive updates. The --cumulative option makes highlighting "sticky", presenting a running display of all positions that have ever changed. The -t or --no-title option turns off the header showing the interval, command, and current time at the top of the display, as well as the following blank line.

watch will run until interrupted.

Note that command is given to "sh -c" which means that you may need to use extra quoting to get the desired effect.
Note that POSIX option processing is used (i.e., option processing stops at the first non-option argument). This means that flags after command don't get interpreted by watch itself.

To watch for mail, you might do

watch -n 60 from
To watch the contents of a directory change, you could use

watch -d ls -l
If you're only interested in files owned by user joe, you might use

watch -d 'ls -l | fgrep joe'
To see the effects of quoting, try these out

watch echo $$
watch echo '$$'
watch echo "'"'$$'"'"
You can watch for your administrator to install the latest kernel with

watch uname -r
(Just kidding.)
* ```head```

About head

head makes it easy to output the first part of files.


head, by default, prints the first 10 lines of each FILE to standard output. With more than one FILE, it precedes each set of output with a header identifying the file name. If no FILE is specified, or when FILE is specified as a dash ("-"), head reads from standard input.

head syntax

head [OPTION]... [FILE]...


-c, --bytes=[-]num Print the first num bytes of each file; with a leading '-', print all but the last num bytes of each file.
-n, --lines=[-]num Print the first num lines instead of the first 10; with a leading '-', print all but the last num lines of each file.
-q, --quiet, --silent Never print headers identifying file names.
-v, --verbose Always print headers identifying file names.
--help Display a help message and exit.
--version Output version information and exit.

In the above options, num may have a multiplier suffix:

b 512
kB 1000
K 1024
MB 1000*1000
M 1024*1024
GB 1000*1000*1000
G 1024*1024*1024

...and so on for T, P, E, Z, Y.

head examples

head myfile.txt

Display the first ten lines of myfile.txt.

head -15 myfile.txt

Display the first fifteen lines of myfile.txt.

head myfile.txt myfile2.txt

Display the first ten lines of both myfile.txt and myfile2.txt, with a header before each that indicates the file name.

head -n 5 myfile.txt myfile2.txt

Displays only the first 5 lines of both files.

head -c 20 myfile.txt

Outputs only the first twenty bytes (characters) of myfile.txt. A newline counts as a single character, so if head prints a newline, it counts as one byte.

head -n 5K myfile.txt

Displays the first 5,120 lines of myfile.txt (the K suffix multiplies by 1024).

head -c 6M myfile.txt

Displays the first six megabytes.

head -

If a dash is specified for the file name, head reads from standard input rather than a regular file.

head myfile.txt myfile2.txt -

Display the first ten lines of myfile.txt, myfile2.txt, and standard input.

head -n 4 *.txt

Display the first four lines of every file in the working directory whose file name ends in the extension .txt.

head -n 4 -q *.txt

Same as the previous command, but uses quiet (-q) output, which will not print a header before the lines of each file.
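The leading '-' form of -n and -c described in the options above is easy to miss; a quick sketch (GNU head):

```shell
# Print all but the last 2 lines
printf '1\n2\n3\n4\n5\n' | head -n -2   # 1 2 3

# Print all but the last 4 bytes (i.e. drop "4\n5\n")
printf '1\n2\n3\n4\n5\n' | head -c -4   # 1 2 3
```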

cat — Output the contents of a file.
more — Display text one screen at a time.
pg — Browse page by page through text files.
tail — Print the last lines of a text file.

* ```tail```

About tail

tail outputs the last part, or "tail", of files. It can also monitor new information written to the file in real time, displaying the newest entries in a system log, for example.


tail [{-c |--bytes=}num] [-f] [--follow[={name|descriptor}]] 
     [-F] [{-n |--lines=}num] [--max-unchanged-stats[=num]] 
     [--pid=pid] [{-q|--quiet|--silent}] [--retry] 
     [{-s |--sleep-interval=}num] [{-v|--verbose}] [file ...]
tail --help
tail --version


By default, tail prints the last 10 lines of each file to standard output. If you specify more than one file, each set of output is prefixed with a header showing the file name.

If no file is specified, or if file is a dash ("-"), tail reads from standard input.


Option Description
-c [+]num, --bytes=[+]num
Output the last num bytes of each file.

You can also use a plus sign before num to output everything starting at byte num. For instance, -c +1 will print everything.

A multiplier suffix can be used after num to specify units: b (512), kB (1000), K (1024), MB (1000*1000), M (1024*1024), GB (1000*1000*1000), G (1024*1024*1024), and so on for T (terabyte), P (petabyte), E (exabyte), Z (zettabyte), Y (yottabyte).
-f, --follow[={name|descriptor}]
This option causes tail to loop forever, checking for new data at the end of the file(s). When new data appears, it is printed.

If you follow more than one file, a header is printed to indicate which file's data is being printed.

If the file shrinks instead of growing, tail lets you know with a message.

If you specify name, the file with that name is followed, regardless of its file descriptor.

If you specify descriptor, the same file is followed, even if it is renamed. This is the default behavior.
-F "Follow and retry". Same as using --follow=name --retry.
-n num, --lines=num
Output the last num lines, instead of the default (10).

If you put a plus sign before num, tail outputs all lines beginning with that line. For example, -n +1 will print every line.
--max-unchanged-stats=num If you are following a file with -f or --follow=name, tail continuously checks the file to see if its size has changed. If the size has changed, it reopens the file and looks for new data to print. The --max-unchanged-stats option reopens a file, even if its size has not changed, after every num checks.

This option is useful if the file might be spontaneously unlinked or renamed, such as when log files are automatically rotated.
--pid=pid When following with -f or --follow, terminate operation after process ID pid dies.
-q, --quiet, --silent Never output headers.
--retry Keep trying to open a file even if it is temporarily inaccessible; useful with the --follow=name option.
-s num, --sleep-interval=num
When following with -f or --follow, sleep for approximately num seconds between file checks. With --pid=pid, check process pid at least once every num seconds.
-v, --verbose Always print headers.
--help Display a help message, and exit.
--version Display version information, and exit.
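The plus-sign form of -n described above behaves quite differently from the plain form; a quick sketch:

```shell
# Last two lines
printf '1\n2\n3\n4\n5\n' | tail -n 2    # 4 5

# Everything from line 4 onward
printf '1\n2\n3\n4\n5\n' | tail -n +4   # 4 5
```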


tail myfile.txt

Outputs the last 10 lines of the file myfile.txt.

tail -n 100 myfile.txt

Outputs the last 100 lines of the file myfile.txt.

tail -f myfile.txt

Outputs the last 10 lines of myfile.txt, and monitors myfile.txt for updates; tail then continues to output any new lines that are added to myfile.txt.

Tip: tail will follow the file forever. To stop it, press CTRL + C.

tail -f access.log | grep '192.168.0.1'

This is a useful example of using tail and grep to selectively monitor a log file in real time.

In this command, tail monitors the file access.log. It pipes access.log's final ten lines, and any new lines added, to the grep utility. grep reads the output from tail and prints only those lines which contain the given pattern (here, an example IP address).