I’d argue that the strength of the shell is that it lives outside the purpose of any particular program. It lets you program interactions with the operating system, the programs you have installed, and the file system. It is also probably the programming language most often used interactively, through its read-eval-print loop (REPL).
Shells do support arithmetic, but that’s not their strong suit. Doug McIlroy said this:
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.
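Arithmetic is there when you need it, though, via POSIX $(( )) arithmetic expansion. A quick sketch:

```shell
# Integer-only arithmetic; no external program like expr or bc needed.
width=1920
height=1080
echo $(( width / height ))    # integer division truncates: 1
echo $(( width % height ))    # remainder: 840
```

Note that everything is integer arithmetic; for floating point you’d shell out to a tool like bc or awk.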
Where it shines is in string manipulation, file name manipulation in particular. In zsh, appending :r after a variable strips off the extension, giving the base name. :t strips off the parent directory. :h strips off the file name, leaving the parent directory’s path. I can convert a batch of images from one format to another with a script like this:
src=$1
dst=$2
for i in *.$src; do
  convert "$i" "${i:r}.$dst"
done
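The zsh modifiers have close analogues in portable parameter expansion, if you ever need the same transformations in a plain POSIX shell. A sketch, using a made-up path:

```shell
p=/photos/trip/beach.jpg
echo "${p%.*}"     # like :r — strip the extension:       /photos/trip/beach
echo "${p##*/}"    # like :t — strip the parent (“tail”): beach.jpg
echo "${p%/*}"     # like :h — strip the name (“head”):   /photos/trip
```

The mnemonic: % trims the shortest matching suffix, %% the longest; # and ## do the same from the front.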
Yes, one can either put general, reusable code in its own file or inside a function. We saw last lecture, however, that shells also have a notion of scope. When we ran our script that tried to return us to a remembered directory, it didn’t have the effect we wanted. That’s because the working directory was changed only for the process that the shell spawned to run our script; the shell’s own process and its working directory weren’t affected. Sometimes we don’t want such strict walls between scopes. In the shell we have a few options: sourcing or aliases.
alias here='pwd > ~/.where'
alias there='cd "$(cat ~/.where)"'
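The sourcing option looks like this: functions defined in a sourced file run in the current shell process, so a cd inside them sticks. A sketch of here/there written as functions instead of aliases:

```shell
# Put these in a file and source it (e.g., from your .zshrc).
here() { pwd > ~/.where; }           # remember the current directory
there() { cd "$(cat ~/.where)"; }    # jump back to it, in *this* shell
```

Because no child process is spawned, the cd in there changes the working directory of the shell you typed it in — exactly the behavior our standalone script couldn’t achieve.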
Yes, through positional parameters. Unlike in Java and C, however, nobody checks whether all the parameters were properly passed. For return values, we can only return small integers (0–255) through a process’s exit status.
However, the data that a shell script writes to standard output can often be parsed by the caller. It is here that the distinction between printing and returning that we stressed in CS1 turns to rubbish.
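A sketch of the idea, using a made-up function named add: it “returns” its result by printing it, the caller captures that output with command substitution, and the exit status travels separately.

```shell
# add prints its result; the exit status only signals success or failure.
add() { echo $(( $1 + $2 )); }

sum=$(add 2 3)        # capture the printed output as the "return value"
echo "$sum"           # → 5

add 2 3 > /dev/null   # the exit status is a separate channel
echo $?               # → 0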
Normally, the command generating the error fails, but execution picks up at the next command. This is different from our Java experience. In C, things generally didn’t grind to a halt except when we encountered a segfault or ran out of memory. Execution continued on. It’d be nice to force a shell script to stop at the first error, wouldn’t it? We can set an option to achieve that:
# set -e # uncomment this to bail at the first error
cp ~/missing.file found
date >> found
If we have time, we’ll have a peek at some Ruby scripts to do the following:
Calculate an aspect ratio from dimensions given as command line parameters.
Generate a list of words that rhyme with a word given as a command-line parameter.