Chapter 1. What is the shell?

  • date
  • cal
  • df: see the amount of free space on disk drives.
  • free: see the amount of free memory.
  • exit (or C-d): end the terminal session.

Chapter 2. Navigation

  • pwd
  • ls
  • cd
  • Relative and absolute path names.

Chapter 3. Exploring the system

  • Commands are often followed by one or more options and arguments.
    command -shortOptions arguments
    command --longOptions arguments
        
  • file
    [gp]$ file bar.sh
    bar.sh: Bourne-Again shell script, ASCII text executable
        
  • less
  • symbolic links (aka soft links, aka symlinks).

Chapter 4. Manipulating files and directories

  • cp
  • mv
  • mkdir
  • rm
  • ls

Chapter 5. Working with commands

  • By `command' we can mean four different things:
    1. An executable program;
    2. A command built into the shell itself (shell builtins);
    3. A shell function (incorporated into the environment);
    4. An alias.
  • type tells you the kind of a command.
  • which tells you the exact location of an executable.
  • help
  • --help
  • man
  • apropos
  • whatis
  • info
  • alias / unalias

Chapter 6. Redirection

Normally: keyboard (stdin) –> program –> screen (stdout, stderr)

I/O redirection allows us to change that.

Redirecting Standard Output

Redirecting stdout:

ls -l /usr/bin > ls-output.txt

Redirecting stdout (appending):

ls -l /usr/bin >> ls-output.txt

Redirecting Standard Error

Redirecting stderr:

ls -l /bin/usr 2> ls-error.txt

Why that ‘2’? It is the file descriptor number of standard error. The shell notation lets us refer to a stream by its file descriptor number, so to redirect standard output we could have written ‘1>’ instead of ‘>’.
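
For example, these two commands are equivalent (‘>’ with no number defaults to file descriptor 1):

ls -l /usr/bin 1> ls-output.txt
ls -l /usr/bin > ls-output.txt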

How do we redirect both stdout and stderr to one file? There are two ways. The good old way:

ls -l /usr/bin > ls-output.txt 2>&1

Newer way:

ls -l /usr/bin &> ls-output.txt

Appending both stdout and stderr to one file:

ls -l /usr/bin &>> ls-output.txt

Shotts does not say what the old way of doing this would be. It has to be the following (there is no 2>>&1 operator; 2>&1 just duplicates the file descriptor, and the appending is already handled by >>):

ls -l /usr/bin >> ls-output.txt 2>&1

Also: Shotts does not say it, but I gather that the way to redirect — mutatis mutandis for appending — stdout to one file and stderr to a different file is:

ls -l /usr/bin > ls-output.txt 2> ls-error.txt
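
And, spelling out the "mutatis mutandis" part, the appending variant would presumably be:

ls -l /usr/bin >> ls-output.txt 2>> ls-error.txt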

Throw Output Away

To throw output away we can use the special /dev/null file (the so-called bit bucket):

ls -l /bin/usr 2> /dev/null

Redirecting Standard Input

cat reads one or more files and copies them to stdout.

However, if cat is given no arguments, it reads from stdin.

This feature can be used to write short files:

  $ cat > foo.txt
  hello there.
  CTRL-D

Given that cat accepts standard input, we can redirect it:

cat < file-with-some-text.txt

This gives the same result as passing the file name as an argument ("cat file-with-some-text.txt").

Piping

ls -l /usr/bin | less
ls /bin /usr/bin | sort | less

`>' and `|' should not be confused. `>' (redirection operator) "connects a command with a file". `|' (pipeline operator) connects the output of one command with the input of another command. In particular, it should be remembered that the redirection operator silently creates and overwrites files, which can have destructive effects.
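
A small sketch of that destructive effect (the file name here is made up): the shell truncates the target of ‘>’ before any command even runs.

> my-notes.txt          # no command at all: the file is emptied on the spot
wc -l my-notes.txt      # prints: 0 my-notes.txt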

uniq

uniq removes adjacent duplicate lines (which is why its input is usually sorted first)

ls /bin /usr/bin | sort | uniq | less

If we want to see the list of duplicates:

ls /bin /usr/bin | sort | uniq -d | less

wc

ls /bin /usr/bin | sort | uniq | wc -l

grep

ls /bin /usr/bin | sort | uniq | grep zip

head/tail

head -n 5 ls-output.txt
tail -n 5 ls-output.txt

tail can be used in pipelines:

ls /usr/bin | tail -n 5

tail can be used to view files in real time:

tail -f /var/log/messages

tee

tee is a very useful command. It reads from stdin and copies the input both to standard output and to one or more files:

ls /usr/bin | tee ls.txt | grep zip

Chapter 7. Seeing the world as the shell sees it

Expansion and quoting are arguably the most important subjects to learn about the shell.

  • Expansion
    • Pathname Expansion
      • echo D*
      • echo /usr/*/share
    • Tilde Expansion
      • echo ~
      • echo ~foo
    • Arithmetic Expansion
      • echo $((2 + 2))
      • echo $(($((5**2)) * 3)) or echo $(((5**2) * 3))
    • Brace Expansion
      • echo Front-{A,B,C}-Back –> Front-A-Back Front-B-Back Front-C-Back
      • echo Number_{1..5} –> Number_1 Number_2 Number_3 Number_4 Number_5
      • mkdir {2007..2009}-{01..12}
    • Parameter Expansion
      • echo $USER
    • Command Substitution
      • echo $(ls)
      • ls -l $(which cp)
      • file $(ls -d /usr/bin/* | grep zip)
      • Instead of dollar sign and parentheses you can use an older syntax: backquotes. E.g.: ls -l `which cp`
  • Quoting
    • Double quotes: within them all special characters lose their special meaning, except for $ (dollar sign), \ (backslash) and ` (backquote). That is, parameter expansion, arithmetic expansion and command substitution still work.
      • ~ls -l "two words.txt"~
      • ~echo "$USER $((2+2)) $(cal)"~
      • Can you tell why echo $(cal) and ~echo "$(cal)"~ do different things? … (p.69) See the note after this list.
    • single quotes: all expansion is suppressed.
    • Escaping
      • ~echo "The balance for user $USER is: \$5.00"~
      • ~sleep 10; echo -e "Time's up\a"~
      • ~sleep 10; echo "Time's up" $'\a'~
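
A note on the cal question above: without double quotes the result of the command substitution undergoes word splitting, so the calendar's newlines and column spacing collapse into single spaces and everything comes out on one line; inside double quotes the embedded whitespace is preserved and the calendar keeps its shape.

echo $(cal)       # one long line
echo "$(cal)"     # formatted calendar, just as cal prints it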

Chapter 8. Advanced Keyboard Tricks

[…]

Chapter 9. Permissions

  • Unix is a multiuser system.
  • id
  • Access rights are defined in terms of
    • read access;
    • write access;
    • execution access.
  • chmod
    • Octal number representation
      • chmod 600 foo.txt
    • Symbolic representation (see the sketches after this list)
  • umask
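
A couple of sketches for the last two items (the values are just examples):

chmod u+x,go-w foo.txt   # symbolic form: add execute for the owner, remove write for group and others
umask 0022               # newly created files and directories get no write permission for group and others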

Chapter 10. Processes

  • Modern operating systems are (usually) multitasking: they create the illusion of doing more than one thing at once. They do so by rapidly switching from one executing program to another. This is achieved through the kernel’s management of processes. Processes are how Linux organizes the different programs waiting for their turn at the CPU.
  • When a system starts up, the kernel starts some processes and launches the program init. init runs a series of scripts (so-called init scripts, located in /etc), which start the system services, many of them daemon programs.
  • parent process –> child process.
  • Each process is assigned a number called PID (process ID). PIDs are assigned in ascending order. init’s PID is always 1.
  • Like files, processes have owners and user IDs, etc.

Viewing Processes

  • ps

E.g.:

[user]$ ps
  PID  TTY          TIME CMD
 25621 pts/6    00:00:00 bash
101436 pts/6    00:00:00 ps
  • ps, by default, does not show very much… it shows only the processes associated with the current terminal session.
  • TTY: the controlling terminal for the process.
  • TIME: the amount of CPU time consumed by the process.
  • If we add the x option, we tell ps to also show the processes that are not controlled by the terminal we are currently using.
    [user]$ ps x
    PID TTY      STAT   TIME COMMAND
    474 ?        Ss     0:00 /usr/lib/systemd/systemd --user
    475 ?        S      0:00 (sd-pam)
    482 tty1     Ss     0:00 -bash
    504 tty1     S+     0:00 xinit -- vt01
    505 tty1     Sl    94:57 /usr/lib/Xorg :0 vt01
    510 tty1     Sl    47:30 emacs -l /home/gp/.emacs.d/init-exwm.el
    513 ?        Ssl    0:23 emacs --daemon
        
  • ? means no controlling terminal.
  • STAT: the current status of the process.
  • Process states:
    • R: running;
    • S: sleeping; waiting for an event;
    • D: Uninterruptible sleep; waiting for I/O;
    • T: Stopped;
    • Z: A defunct or "zombie" process; terminated but not cleaned up by its parent;
    • <: A high-priority process. A process with high priority is said to be “less nice” because it takes more of the CPU’s time.
    • N: A low-priority process (a nice process). A process that gets processor time only after other processes with higher priority have been serviced.
  • To see more info:
    ps aux
        
    • We can now see:
      • USER: User ID, the owner of the process;
      • %CPU: CPU usage;
      • %MEM: Memory usage;
      • VSZ: Virtual memory size;
      • RSS: Resident set size — the amount of RAM the process is using in kilobytes;
      • START: Time when the process started.

Viewing processes dynamically

top shows a system summary and a table of processes sorted by CPU activity.

Controlling Processes

xlogo is a program that shows a resizable window containing an X. We can start it with the xlogo command. Now, notice that the shell prompt has not returned: the shell is waiting for the program to finish. Close the xlogo window and you'll see that the prompt returns. We can also interrupt xlogo with Ctrl-c: if you start xlogo and then press Ctrl-c, the program is interrupted, the xlogo window closes, and the prompt returns.

However, we can get the prompt back without terminating xlogo: xlogo &. If the command is followed by an ampersand, the program is placed in the background. The shell, before returning the prompt, prints some numbers that tell us the job number and the PID. This is part of a shell feature called job control.

[foo@bar]$ xlogo &
[1] 103880
[foo@bar]$

If we now run ps we get:

PID TTY          TIME CMD
101522 pts/8    00:00:00 bash
103880 pts/8    00:00:00 xlogo
103917 pts/8    00:00:00 ps

With jobs we can see the list of jobs that have been launched from our terminal:

[1]+  Running                 xlogo &

We can make xlogo return to the foreground: fg %1.

We can stop (suspend) a process with Ctrl-z; the process is placed in the background in a stopped state. The program is not running (try to resize the xlogo window: nothing happens).

Once we have stopped a process we can continue its execution in the foreground by using fg, or resume its execution in the background with bg.

Signals

kill can be used to terminate a process.

We can pass kill a PID, but also a jobspec (for example, %1).

More precisely, kill does not really "kill" a process; it sends a signal to it. Signals are one of the ways in which the operating system communicates with programs.

Actually, we have already used signals with Ctrl-c and Ctrl-z. In those cases, the terminal receives the keystroke and sends a signal to the program in the foreground: with Ctrl-c, INT (interrupt) is sent; with Ctrl-z, TSTP (terminal stop). Programs listen for signals and act upon them.

The most common syntax of kill is:

kill -signal PID...

If no signal is specified, then TERM (terminate) is sent by default.
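
For example (the PID here is made up):

kill 13601        # sends TERM, the default signal
kill -INT %1      # sends INT to job 1, like pressing Ctrl-c on a foreground job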

  • Common signals:
    • HUP …
    • INT …
    • KILL …
    • TERM …
    • CONT …
    • STOP …
    • TSTP …
    • QUIT
    • SEGV
    • WINCH

killing multiple processes:

killall [-u user] [-signal] name…

  • Some other important commands:
    • pstree
    • vmstat
    • xload
    • tload

Chapter 11. The Environment

  • Environment variables
  • Shell variables
  • printenv: show environment variables.
  • printenv USER: print the value of the specific variable USER.
  • set: show both environment variables and shell variables, as well as any defined shell functions.
  • echo $USER: print the value of the specific variable USER.
  • alias: show aliases.
  • When we log on to the system, bash starts and reads a series of configuration scripts called startup files, which define the default environment shared by all users. After that, other startup files in the home directory are read, which define the user's personal environment.
  • The exact sequence of startup files read by bash depends on the kind of shell session being started. There are two types of shell sessions:
    • Login shell session
      • Startup files:
        • /etc/profile
        • ~~/.bash_profile~
        • ~~/.bash_login~
        • ~~/.profile~
    • Non-login shell session.
      • Startup files:
        • /etc/bash.bashrc
        • ~~/.bashrc~
  • As a general rule, to add directories to the PATH or define other environment variables, one changes the .bash_profile file (or equivalent file; Ubuntu uses .profile). The rest goes in the .bashrc file.
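
A sketch of that division of labor (the specific values are only examples):

# ~/.bash_profile (or ~/.profile on Ubuntu): environment definitions
export EDITOR=vim

# ~/.bashrc: everything else, such as aliases
alias ll='ls -l'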

Chapter 24. Writing Your First Script

The shell is both a command line interface to the system and a scripting language interpreter.

Most of the things that can be done on the command line can be done in scripts. Most of the things that can be done in scripts can be done on the command line.

Save hello_world in ~/bin/:

#!/bin/bash

echo 'Hello world'

Make the script executable:

chmod 755 hello_world

Now you can execute it:

./hello_world

If you want to execute it just by typing hello_world, then it must be in a directory listed in the PATH variable. To print the value of the PATH variable:

echo $PATH

Usually ~/bin is included. If it's not, let's include it, so we can start putting our scripts in it and executing them just by typing their name. ~/bin is a good place for scripts intended for personal use.
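
A minimal way to do that for the current session (put the same export line in a startup file to make it permanent):

mkdir -p ~/bin                  # create the directory if it does not exist yet
export PATH="$HOME/bin:$PATH"   # prepend it to PATH
hello_world                     # the script is now found without ./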

/usr/local/bin is a standard place for scripts that are intended for everyone to use.

System administrators often put their scripts in /usr/local/sbin.

“Locally supplied software” is usually in the /usr/local hierarchy.

Chapter 25. Starting a Project

sys_info_page file:

#!/bin/bash

# Program to output a system information page

echo "<html>"
echo "  <head>"
echo "    <title>Page Title</title>"
echo "  </head>"
echo "  <body>"
echo "    Page body."
echo "  </body>"
echo "</html>"

We can run it and redirect the output to a file so that we can see the result in a web browser:

sys_info_page > sys_info_page.html

Given that quoted strings can contain newlines — try it on the command line; it works — we can combine the echo commands into one:

  #!/bin/bash

  # Program to output a system information page

  echo "<html>
        <head>
          <title>Page Title</title>
        </head>
        <body>
          Page body.
        </body>
</html>"

Let's define a variable and use its value:

#!/bin/bash

# Program to output a system information page

title="System Information Report"

echo "<html>
<head>
    <title>$title</title>
</head>

<body>
    <h1>$title</h1>
</body>
</html>"

A command like echo $foo undergoes parameter expansion. If the value of the variable foo is the string "bar", then the command will expand to echo bar. Empty variables expand to nothing.

A common convention is writing constants in uppercase and normal variables in lowercase.

#!/bin/bash

# Program to output a system information page

TITLE="System Information Report For $HOSTNAME"

echo "<html>
<head>
    <title>$TITLE</title>
</head>
<body>
    <h1>$TITLE</h1>
</body>
</html>"

Examples of variable assignment:

a=z                  # Assign the string "z" to variable a.

b="a string"         # Embedded spaces must be within quotes.

c="a string and $b"  # Other expansions such as variables can be
                     # expanded into the assignment.

d="$(ls -l foo.txt)" # Results of a command.

e=$((5 * 7))         # Arithmetic expansion.

f="\t\ta string\n"   # Escape sequences such as tabs and newlines.

g="${filename}1"     # the trailing 1 is not taken as part of the
                     # variable

Let’s add some data to our program:

#!/bin/bash

# Program to output a system information page

TITLE="System Information Report For $HOSTNAME"
CURRENT_TIME="$(date +"%x %r %Z")"
TIMESTAMP="Generated $CURRENT_TIME, by $USER"

echo "<html>
<head>
    <title>$TITLE</title>
</head>
<body>
    <h1>$TITLE</h1>
    <p>$TIMESTAMP</p>
</body>
</html>"

We have seen two ways to output text using echo. There is a third way: here document (or here script). It works like this:

command << token
text
token

command is the name of the command that accepts standard input (which we feed with text) and token is a string used to indicate the end of the embedded text. Let's use a here document in our script!

#!/bin/bash

# Program to output a system information page

TITLE="System Information Report For $HOSTNAME"
CURRENT_TIME="$(date +"%x %r %Z")"
TIMESTAMP="Generated $CURRENT_TIME, by $USER"

cat << _EOF_
<html>
     <head>
          <title>$TITLE</title>
     </head>
     <body>
          <h1>$TITLE</h1>
          <p>$TIMESTAMP</p>
     </body>
</html>
_EOF_

Single and double quotes within here documents lose their special meaning to the shell. This can turn out to be handy.
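
A small sketch of that (the variable here is made up):

foo="some text"
cat << _EOF_
$foo
"$foo"
'$foo'
\$foo
_EOF_

The single and double quotes come out literally, the first three $foo occurrences are still expanded, and the backslash still escapes the dollar sign on the last line.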

Another example:

#!/bin/bash

# Script to retrieve a file via FTP

FTP_SERVER=ftp.nl.debian.org
FTP_PATH=/debian/dists/stretch/main/installer-amd64/current/images/cdrom
REMOTE_FILE=debian-cd_info.tar.gz

ftp -n << _EOF_
open $FTP_SERVER
user anonymous me@linuxbox
cd $FTP_PATH
hash
get $REMOTE_FILE
bye
_EOF_
ls -l "$REMOTE_FILE"

Chapter 26. Top-Down Design

Top-down design is a common method of designing programs and one that is well suited to shell programming in particular.

Let's add to our script three commands that have yet to be implemented:

#!/bin/bash

# Program to output a system information page

TITLE="System Information Report For $HOSTNAME"
CURRENT_TIME="$(date +"%x %r %Z")"
TIMESTAMP="Generated $CURRENT_TIME, by $USER"

cat << _EOF_
<html>
<head>
    <title>$TITLE</title>
</head>
<body>
    <h1>$TITLE</h1>
    <p>$TIMESTAMP</p>
    $(report_uptime)
    $(report_disk_space)
    $(report_home_space)
</body>
</html>
_EOF_

The gap represented by the missing commands can be filled in two ways: we could write separate scripts and place them in a directory listed in PATH, or we could write shell functions within our script.

#!/bin/bash

# Program to output a system information page

TITLE="System Information Report For $HOSTNAME"
CURRENT_TIME="$(date +"%x %r %Z")"
TIMESTAMP="Generated $CURRENT_TIME, by $USER"

report_uptime () {
cat <<- _EOF_
        <h2>System Uptime</h2>
        <pre>$(uptime)</pre>
        _EOF_
    return
}

report_disk_space () {
cat <<- _EOF_
        <h2>Disk Space Utilization</h2>
        <pre>$(df -h)</pre>
        _EOF_
return
}

report_home_space () {
cat <<- _EOF_
        <h2>Home Space Utilization</h2>
        <pre>$(du -sh /home/*)</pre>
        _EOF_
return
}
cat << _EOF_
<html>
<head>
<title>$TITLE</title>
</head>
<body>
    <h1>$TITLE</h1>
    <p>$TIMESTAMP</p>
    $(report_uptime)
    $(report_disk_space)
    $(report_home_space)
</body>
</html>
_EOF_
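
A side note on the functions above: they use `<<-' rather than `<<'. The hyphen tells the shell to strip leading tab characters from the here document, so the text can be indented to match the surrounding code; it strips tabs only, not spaces. A tiny sketch (the indented lines start with real TAB characters):

cat <<- _EOF_
	This line is indented with a tab character, which <<- removes on output.
	_EOF_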

Chapter 27. Flow Control: Branching with if.

x=5

if [ "$x" -eq 5 ]; then
    echo "x equals 5."
else
    echo "x does not equal 5."
fi
if [ "$?" -eq 0 ]; then echo "last command's exit status was 0"; else echo "last command's exit status was not 0. It was $?"; fi

We have just used the command test. Well, we have used a notation that does not actually involve writing `test`. There are two syntaxes. Saying:

test expression

is the same as saying:

[ expression ]

They are both commands. They support a wide range of expressions:

Expression         Is true if:
file1 -ef file2    file1 and file2 have the same inode numbers (the two
                   filenames refer to the same file by hard linking).
file1 -nt file2    file1 is newer than file2.
file1 -ot file2    file1 is older than file2.
-b file            file exists and is a block-special (device) file.
-c file            file exists and is a character-special (device) file.
-d file            file exists and is a directory.
-e file            file exists.
-f file            file exists and is a regular file.
-g file            file exists and is set-group-ID.
-G file            file exists and is owned by the effective group ID.
-k file            file exists and has its "sticky bit" set.
-L file            file exists and is a symbolic link.
-O file            file exists and is owned by the effective user ID.
-p file            file exists and is a named pipe.
-r file            file exists and is readable (has read permission for the
                   effective user).
-s file            file exists and has a length greater than zero.
-S file            file exists and is a network socket.
-t fd              fd is a file descriptor directed to/from the terminal.
                   This can be used to determine whether standard
                   input/output/error is being redirected.
-u file            file exists and is setuid.
-w file            file exists and is writable (has write permission for
                   the effective user).
-x file            file exists and is executable (has execute/search
                   permission for the effective user).

#!/bin/bash

# test-file: Evaluate the status of a file

FILE=~/.bashrc

if [ -e "$FILE" ]; then
    if [ -f "$FILE" ]; then
        echo "$FILE is a regular file."
    fi
    if [ -d "$FILE" ]; then
        echo "$FILE is a directory."
    fi
    if [ -r "$FILE" ]; then
        echo "$FILE is readable."
    fi
    if [ -w "$FILE" ]; then
        echo "$FILE is writable."
    fi
    if [ -x "$FILE" ]; then
        echo "$FILE is executable/searchable."
    fi
else
    echo "$FILE does not exist"
    exit 1
fi

exit

As a function it would have been:

test_file () {

    # test-file: Evaluate the status of a file

    FILE=~/.bashrc

    if [ -e "$FILE" ]; then
        if [ -f "$FILE" ]; then
            echo "$FILE is a regular file."
        fi
        if [ -d "$FILE" ]; then
            echo "$FILE is a directory."
        fi
        if [ -r "$FILE" ]; then
            echo "$FILE is readable."
        fi
        if [ -w "$FILE" ]; then
            echo "$FILE is writable."
        fi
        if [ -x "$FILE" ]; then
            echo "$FILE is executable/searchable."
        fi
    else
        echo "$FILE does not exist"
        return 1
    fi
}

Expressions to evaluate strings:

Expression            Is true if:
string                string is not null.
-n string             The length of string is greater than zero.
-z string             The length of string is zero.
string1 = string2     string1 and string2 are equal. Single or double equal
string1 == string2    signs may be used. The use of double equal signs is
                      greatly preferred [gp: by whom?], but is not POSIX
                      compliant.
string1 != string2    string1 and string2 are not equal.
string1 > string2     string1 sorts after string2.
string1 < string2     string1 sorts before string2.

(Careful with > and <: they must be quoted or escaped when used with test, otherwise the shell will interpret them as redirection operators.)
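
A sketch of what goes wrong (the strings are made up):

[ "abc" \> "abd" ]; echo $?   # escaped: a string comparison; prints 1
[ "abc" > "abd" ]; echo $?    # unescaped: the shell treats > as redirection, so test only
                              # sees "abc"; a file named abd is silently created and 0 is printed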

#!/bin/bash

# test-string: evaluate the value of a string

ANSWER=maybe

if [ -z "$ANSWER" ]; then
    echo "There is no answer." >&2
    exit 1
fi

if [ "$ANSWER" = "yes" ]; then
    echo "The answer is YES."
elif [ "$ANSWER" = "no" ]; then
    echo "The answer is NO."
elif [ "$ANSWER" = "maybe" ]; then
    echo "The answer is MAYBE."
else
    echo "The answer is UNKNOWN."
fi

Expressions to evaluate integers:

Expression               Is true if:
integer1 -eq integer2    integer1 is equal to integer2.
integer1 -ne integer2    integer1 is not equal to integer2.
integer1 -le integer2    integer1 is less than or equal to integer2.
integer1 -lt integer2    integer1 is less than integer2.
integer1 -ge integer2    integer1 is greater than or equal to integer2.
integer1 -gt integer2    integer1 is greater than integer2.

#!/bin/bash

# test-integer: evaluate the value of an integer.

INT=-5

if [ -z "$INT" ]; then
    echo "INT is empty." >&2
    exit 1
fi

if [ "$INT" -eq 0 ]; then
    echo "INT is zero."
else
    if [ "$INT" -lt 0 ]; then
        echo "INT is negative."
    else
        echo "INT is positive."
    fi
    if [ $((INT % 2)) -eq 0 ]; then
        echo "INT is even."
    else
        echo "INT is odd."
    fi
fi

Modern versions of bash allow for the following syntax:

[[ expression ]]

The [[ ]] command is like test, but it adds a new string expression:

string1 =~ regex

The above expression returns true if string1 is matched by the regex.

Example of using a regular expression to limit the value of INT to strings that are whole numbers only:

#!/bin/bash
# test-integer2: evaluate the value of an integer.
INT=-5
if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if [ "$INT" -eq 0 ]; then
        echo "INT is zero."
    else
        if [ "$INT" -lt 0 ]; then
            echo "INT is negative."
        else
            echo "INT is positive."
        fi
        if [ $((INT % 2)) -eq 0 ]; then
            echo "INT is even."
        else
            echo "INT is odd."
        fi
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi

[[ ]] also makes the == operator support pattern matching the same way pathname expansion does:

[me@linuxbox ~]$ FILE=foo.bar
[me@linuxbox ~]$ if [[ $FILE == foo.* ]]; then
> echo "$FILE matches pattern 'foo.*'"
> fi
foo.bar matches pattern 'foo.*'

Bash also provides the (( )) compound command; useful when operating with integers:

#!/bin/bash

# test-integer2a: evaluate the value of an integer.

INT=-5

if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if ((INT == 0)); then
        echo "INT is zero."
    else
        if ((INT < 0)); then
            echo "INT is negative."
        else
            echo "INT is positive."
        fi
        if (( ((INT % 2)) == 0)); then
            echo "INT is even."
        else
            echo "INT is odd."
        fi
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi

Expressions can be combined by using logical operators:

Operation    test    [[ ]] and (( ))
AND          -a      &&
OR           -o      ||
NOT          !       !

For example:

#!/bin/bash

# test-integer3: determine if an integer is within a
# specified range of values.

MIN_VAL=1
MAX_VAL=100

INT=50

if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if [[ "$INT" -ge "$MIN_VAL" && "$INT" -le "$MAX_VAL" ]]; then
        echo "$INT is within $MIN_VAL to $MAX_VAL."
    else
        echo "$INT is out of range."
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi

We could alternatively have said:

if [ "$INT" -ge "$MIN_VAL" -a "$INT" -le "$MAX_VAL" ]; then
    echo "$INT is within $MIN_VAL to $MAX_VAL."
else
    echo "$INT is out of range."
fi

Another example:

#!/bin/bash

# test-integer4: determine if an integer is outside a
# specified range of values.

MIN_VAL=1
MAX_VAL=100

INT=50

if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    if [[ ! ("$INT" -ge "$MIN_VAL" && "$INT" -le "$MAX_VAL") ]]; then
        echo "$INT is outside $MIN_VAL to $MAX_VAL."
    else
        echo "$INT is in range."
    fi
else
    echo "INT is not an integer." >&2
    exit 1
fi

Using test, it would look like this:

if [ ! \( "$INT" -ge "$MIN_VAL" -a "$INT" -le "$MAX_VAL" \) ]; then
    echo "$INT is outside $MIN_VAL to $MAX_VAL."
else
    echo "$INT is in range."
fi

Remember test is traditional and part of the POSIX specification. [[ ]] is specific to bash. You might prefer one over the other depending on the context.

Chapter 28: Reading Keyboard Input

read (builtin command) is used to read a single line of standard input (so it can be used to read keyboard input as well as a line of data from a file when redirection is used).
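
For example, a sketch of reading a line from a file via redirection (the file is just an example):

read first_line < /etc/hostname
echo "$first_line"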

This is the syntax:

read [-options] [variable...]

#!/bin/bash

# read-integer: evaluate the value of an integer

echo -n "Please enter an integer -> "
read int

if [[ "$int" =~ ^-?[0-9]+$ ]]; then
    if [ "$int" -eq 0 ]; then
        echo "$int is zero."
    else
        if [ "$int" -lt 0 ]; then
            echo "$int is negative."
        else
            echo "$int is positive."
        fi
        if [ $((int % 2)) -eq 0 ]; then
            echo "$int is even."
        else
            echo "$int is odd."
        fi
    fi
else
    echo "Input value is not an integer." >&2
    exit 1
fi

#!/bin/bash

# read-multiple: read multiple values from keyboard

echo -n "Enter one or more values > "
read var1 var2 var3 var4 var5

echo "var1 = '$var1'"
echo "var2 = '$var2'"
echo "var3 = '$var3'"
echo "var4 = '$var4'"
echo "var5 = '$var5'"
#!/bin/bash

# read-single: read multiple values into default variable

echo -n "Enter one or more values > "
read

echo "REPLY = '$REPLY'"
  • read options:
    • -a array: assign the input to array, starting with index zero (see the sketch after this list).
    • -d delimiter: indicate the character used for the end of input.
    • -e: use Readline to handle input.
    • -i string: Use string as a default reply if the user simply presses Enter. It requires the -e option.
    • -n num: Read num characters of input.
    • -p prompt: display custom prompt.
    • -r: raw mode; backslash is not interpreted as an escape.
    • -s: silent mode; no echoing back when typing. Useful for passwords.
    • -t seconds: timeout
    • -u fd: use input from file descriptor fd, rather than standard input.
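
For example, the -a option mentioned above (a sketch):

read -a words <<< "one two three"
echo "${words[1]}"      # prints: two  (array indexing starts at zero)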

Using the -p option:

#!/bin/bash

# read-single: read multiple values into default variable

read -p "Enter one or more values > "

echo "REPLY = '$REPLY'"

Using the -t, -s, and -p options:

#!/bin/bash

# read-secret: input a secret passphrase

if read -t 10 -sp "Enter secret passphrase > " secret_pass; then
    echo -e "\nSecret passphrase = '$secret_pass'"
else
    echo -e "\nInput timed out" >&2
    exit 1
fi

To supply a default response using the -e and -i options together:

#!/bin/bash

# read-default: supply a default value if user presses Enter key

read -e -p "What is your user name? " -i $USER
echo "You answered: '$REPLY'"

The one above is the example given by Shotts. But it seems misleading to me: -i only pre-fills the input line, so if one types something without first deleting the pre-filled text, the answer will be $USER followed by what was typed.

Here is a working solution:

#!/bin/bash

# read-default: supply a default value if user presses Enter key.

read -p "What is your user name? " username
REPLY=${username:-$USER}
echo "You answered: '$REPLY'"

Reading the content of /etc/passwd by changing the value of IFS (only temporarily) and using a "here string":

#!/bin/bash

# read-ifs: read fields from a file

FILE=/etc/passwd

read -p "Enter a username > " user_name

file_info="$(grep "^$user_name:" $FILE)"

if [ -n "$file_info" ]; then
    IFS=":" read user pw uid gid name home shell <<< "$file_info"
    echo "User = 		'$user'"
    echo "UID = 		'$uid'"
    echo "GID = 		'$GID"
    echo "Full Name = 	'$name"
    echo "Home Dir. = 	'$home"
    echo "Shell = 		'$shell"
else
    echo "No such user '$user_name'" >&2
    exit 1
fi

We are using a here string. A here string is like a here document, but consisting of a single string.

Why aren’t we echoing the content of file_info into read? Well, while the read command normally takes input from standard input, you cannot do the following:

echo "foo" | read

Because pipelines create subshells: read would run in a subshell, and any variables it set would disappear when that subshell terminates (p. 370).
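
A quick sketch of the problem and the here-string workaround:

echo "foo" | read        # read runs in a subshell...
echo "$REPLY"            # ...so REPLY is still empty back in the parent shell

read <<< "foo"           # a here string keeps read in the current shell
echo "$REPLY"            # prints: foo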

Chapter 29. Flow Control: Looping with while/until

#!/bin/bash

# while-count: display a series of numbers

count=1

while [[ "$count" -le 5 ]]; do
    echo "$count"
    count=$((count + 1))

done
echo "Finished"
continue

break

until creates a loop that continues until it receives a zero exit status. The condition:

while [[ "$count" -le 5 ]]; do...

can then be written in the following way too:

until [[ "$count" -gt 5 ]]; do...

Chapter 30. Troubleshooting

[…]