
Bash Shell and Beyond

By Anonymous

Introduction

This article continues a series begun in Issues 108 and 109, in which I discuss some of my additions to the standard Linux shell. In my previous article, in Issue 109, I promised to cover dynamically-loadable builtins related to arrays and regex splitting, plus interfaces to external libraries like SQL databases and an XML parser.

Regex Match

Modeled after the Awk match() function, my new match builtin does regex(3) matching.

    match [-23] string regex [submatch]

It returns success (0) if 'string' contains the 'regex' pattern. If the 'submatch' array variable is specified, then by default it will contain all matching substrings: the match for the entire 'regex' and the matches for any parenthesized groups. E.g.

    match Aabc123Z '([a-z]+)([0-9]+)' a         # a=(abc123 abc 123)

where 'abc123' matches the entire 'regex', 'abc' matches the first group '([a-z]+)', and '123' matches the second group '([0-9]+)'.

With the -2 option, 'submatch' will contain 2 elements: the non-matching preamble and the leftover postamble (ie. the segments before and after the 'regex' match). With the -3 option, 'submatch' will contain 3 elements: the preamble, the matching string, and the postamble. E.g.

    match -2 Aabc123Z '([a-z]+)([0-9]+)' a      # a=(A Z)
    match -3 Aabc123Z '([a-z]+)([0-9]+)' a      # a=(A abc123 Z)

where 'A' and 'Z' are the string segments before and after the 'regex', respectively.

You now have 3 different ways of doing regex matching:

  1. the [[ string =~ regex ]] conditional test in standard Bash-3.0, which uses BASH_REMATCH as the array variable (shown below),
  2. the new extended 'case' statement, which uses SUBMATCH as the array variable, and
  3. the match builtin command, where you can specify the array variable and what it should contain.
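
For comparison, here is the first way in stock Bash-3.0, which should behave like the match example above (note that the quoting rules for the =~ operator changed in later Bash releases):

    [[ Aabc123Z =~ '([a-z]+)([0-9]+)' ]]
    echo "${BASH_REMATCH[@]}"           # abc123 abc 123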

Stack and Queue

Quite often, you need to implement a "stack" or "queue" data structure. In shell, you can use positional parameters or an array to hold the data, e.g.

    set -- {a..z}
    set -- $@ Z                 # append to queue
    set -- A $@                 # push to stack
    set -- $2 $1 ${@:3}         # swap first 2 items in stack
    shift 2                     # pop 2 items off the stack
    set -- ${@|:-5:} ${@|::-5}  # rotate queue to the right by 5
    set -- ${@|:5:} ${@|::5}    # rotate queue to the left by 5

This is acceptable for a throw-away script, but is very inefficient because of all the copying of data back and forth.

Here are builtin implementations of stack and queue operations. They directly manipulate positional parameters or arrays (with -a option), in-place without copying the data. They are fast and suitable for general purpose "toolbox" work.

pp_pop [-a array] [n]

Deletes the first N (default 1) positional parameters or array elements. It works like the 'shift' builtin, except that it also handles arrays and pops as many items as it can. It returns an error if the parameter or array is empty.
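
E.g. (a sketch; this and the following examples assume the patched builtins are loaded):

    set -- 1 2 3
    pp_pop 2
    echo $*             # 3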

pp_push [-a array] arg...

Inserts arguments at the beginning of positional parameters or array. E.g.

    set -- 1 2 3
    pp_push a b c
    echo $*             # a b c 1 2 3

pp_append [-a array] arg...

Appends arguments at the end of positional parameters or array. E.g.

    set -- 1 2 3
    pp_append a b c
    echo $*             # 1 2 3 a b c

pp_swap [-a array]

Swaps the first 2 positional parameters (ie. $1, $2) or array elements. It returns an error if there are not at least 2 items to swap.
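
E.g. (a sketch, assuming the patched builtins):

    set -- 1 2 3
    pp_swap
    echo $*             # 2 1 3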

pp_set [-a array] arg...

Sets the argument(s) as new positional parameters or array. Equivalent to

    set arg...
    set -A array arg...         # from Ksh
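
E.g. (again assuming the patched builtins):

    set -- 1 2 3
    pp_set a b c
    echo $*             # a b c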

pp_overwrite [-a array] arg...

Overwrites the parameter(s) in-place. For an array, this is equivalent to

    set +A array arg...         # from Ksh

E.g.

    set -- 1 2 3 4 5 6
    pp_overwrite a b c
    echo $*             # a b c 4 5 6

pp_rotateleft [-a array] [n]

Rotates N (default 1) positional parameters or array elements to the left.

pp_rotateright [-a array] [n]

Rotates N (default 1) positional parameters or array elements to the right.
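
E.g. (a sketch, assuming the patched builtins):

    set -- 1 2 3 4 5
    pp_rotateleft 2
    echo $*             # 3 4 5 1 2
    pp_rotateright 2
    echo $*             # 1 2 3 4 5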

pp_flip [-a array]

Flips the order of the positional parameters or array elements. E.g.

    set -- {a..z}
    pp_flip
    echo $*             # z y x ... a

The stack and queue example at the beginning of this section can now be rewritten as

    set -- {a..z}
    pp_append Z         # append to queue
    pp_push A           # push to stack
    pp_swap             # swap first 2 items in stack
    pp_pop 2            # pop 2 items off the stack
    pp_rotateright 5    # rotate queue to the right by 5
    pp_rotateleft 5     # rotate queue to the left by 5

Transpose and Sort

Transpose and sort problems come up a lot when dealing with tables. There are utilities such as awk(1) and sort(1) to handle these jobs, but to use them you have to pipe the data (or write a file) to the external program, then read back its output and re-parse it to collect the re-ordered data. For well-behaved line-oriented text this works, but a dedicated shell solution is much better, especially when you already have the data parsed and simply want to re-order it.

pp_transpose [-a array] n

Transposes positional parameters or an array representing a matrix ordered by rows into a sequence ordered by columns. N is the row size. For example, the sequence (1 2 3 4 a b c d) represents a 2x4 matrix with rows (1 2 3 4) and (a b c d),

    | 1 2 3 4 |         | 1 a |
    | a b c d |   ==>   | 2 b |
                        | 3 c |
                        | 4 d |

and the transposed sequence is (1 a 2 b 3 c 4 d), representing a 4x2 matrix with 4 rows (1 a), (2 b), (3 c), and (4 d).

    set -- 1 2 3 4 a b c d
    pp_transpose 4
    echo $*             # 1 a 2 b 3 c 4 d

    pp_transpose 2      # back to original sequence

An equivalent solution in pure shell would go (very slowly) like

    set -- 1 2 3 4 a b c d
    eval set -- $(
        for i in `seq 4`; do 
            for j in `seq $i 4 $#`; do 
                echo '"${'$j'}"'
            done
        done
    )
    echo $*             # 1 a 2 b 3 c 4 d

pp_sort [-a array]

Sorts positional parameters or an array in ascending order. If the array is of integer type, then numerical sorting is done, e.g.

    a=( {10..1} )
    pp_sort -a a
    echo ${a[*]}            # 1 10 2 3 ... 9 (string sort)

    declare -i a
    pp_sort -a a
    echo ${a[*]}            # 1 2 3 ... 9 10 (integer sort)

Array Operations

Array cat

arraycat [-a array] a [b ...]

Prints array elements, one array at a time. If the -a option is given, then it appends the data to the 'array' variable instead. This is similar to

    printf '%s\n' "${a[@]}" "${b[@]}" ...
    array=( "${a[@]}" "${b[@]}" ... )

except that you pass variable names (references), as with the strcat() and strcpy() builtins discussed in the previous articles.
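
E.g. (a sketch, assuming the patched builtins):

    a=(1 2)  b=(3 4)
    arraycat a b                # prints 1, 2, 3, 4 on separate lines
    arraycat -a c a b
    declare -p c                # c=(1 2 3 4)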

Array map

In Python (and some other functional languages), you can apply a function to each element of an array without manually looping. If there are 2 or more arrays, then elements are taken from all of the arrays in parallel. I've added a shell version of the Python map() function:

arraymap command a [b ...]

Runs 'command' with arguments taken from the array elements in parallel. The command should take as many positional parameters as there are arrays. This is equivalent to

    command "${a[0]}" "${b[0]}" ...
    command "${a[1]}" "${b[1]}" ...
    ...
    command "${a[N]}" "${b[N]}" ...

where N is the maximum of all indexes. Array elements are referenced by index, not by the order of storage. So, there can be empty parameters.

E.g.

    unset a b;  a=(1 2 3)  b=(4 5 6)
    func () { echo $1$2; }
    arraymap func a b           # join in parallel: 14 25 36

    func () { echo $(($1 + $2)); }
    arraymap func a b           # add in parallel: 5 7 9
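
Because pairing is by index, sparse arrays keep their alignment, and missing elements arrive as null arguments. A sketch (assuming the patched builtins) that should print as shown:

    unset a b;  a=([0]=1 [2]=3)  b=(4 5 6)
    func () { echo "<$1$2>"; }
    arraymap func a b           # <14> <5> <36>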

Array zip and unzip

The names come from the workings of a zipper: you start with two rows of teeth, and when you zip up, you get one row of interleaved teeth. Consider arrays x=(x1 x2 x3 ... xn) and y=(y1 y2 y3 ... yn). Zipping produces a single array xy=(x1 y1 x2 y2 x3 y3 ... xn yn) consisting of interleaved elements of 'x' and 'y'. Of course, unzipping does the reverse.

    x1    x2    x3 ... xn
       y1    y2    y3 ... yn   ==>   x1 y1 x2 y2 x3 y3 ... xn yn

Here are 2 new builtins to "zip" and "unzip" directly within Bash shell.

arrayzip [-a array] name ...

Prints array elements one by one, going across the arrays in parallel. If the -a option is given, then it appends to the 'array' variable instead. Array elements are referenced by index, not by the order of storage, so there can be empty parameters. This is the shell version of the Python zip() function, and is equivalent to

    arraymap 'printf "%s\n"' name ...
    arraymap 'pp_append -a array' name ...

arrayunzip -a array name...

Inverse of 'arrayzip'. Sequentially appends items from 'array' to the 'name' array variables, moving across one row at a time. The output variables are flushed first. If there are not enough input items, then null (empty) strings are appended to the leftover variables.

For example,

    x=(1 2 3 4)  y=(a b c d)
    arrayzip -a xy x y
    declare -p xy               # xy=(1 a 2 b 3 c 4 d)

    unset x y
    arrayunzip -a xy x y
    declare -p x y              # back to original

You can also use array commands to extract rows or columns in a transposition problem. E.g.

    row1=(1 2 3 4)  row2=(a b c d)
    arraycat -a table row{1..2}
    arrayunzip -a table col{1..4}
    declare -p col{1..4}        # (1 a), (2 b), (3 c), (4 d)

Putting Items into an Array

array [-gG glob] [-iInN a:b] [-jspq string] [-evwrR regex] [-EVfc command] name arg...

Given a list of items on the command-line, this new builtin appends the selected items to an array variable. It is designed to be called repeatedly, so you should create or flush the array variable beforehand. Its many options control which items are selected and how.

Content filtering

The following options are command-line versions of the parameter expansion ${var|...}.

-f filter Append 'arg' only if 'filter arg' returns success (0); otherwise, skip to the next 'arg'.

-c command Append the stdout of the command substitution `command arg`, but only if there is output; otherwise, skip to the next 'arg'.

-i a:b Extract Python-style [a:b] substring from each 'arg', ie. arg[a:b], arg[a:b], ...

-I a:b Complement of -i, ie. [:a] + [b:]

-n a:b Extract Python-style [a:b] range from 'arg' sequence, ie. [arg,arg,...][a:b]

-N a:b Complement of -n, ie. [:a] + [b:]

-g glob Append 'arg' matching 'glob' pattern.

-r regex Append 'arg' matching 'regex' pattern.

-G glob Complement of -g.

-R regex Complement of -r.

There are minor differences between the above mechanism and the standard parameter expansion: the -i option extracts a substring from each item, whereas the -n option extracts a subrange of the argument list. Options -I and -N select the complements of -i and -n, respectively, which are not available in ${var|...}.
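
As a sketch of how these selectors work (assuming the patched builtins, and the behavior described above):

    a=()
    array -g '*.txt'  a  note.txt img.png todo.txt
    array -i 0:2  a  abcdef
    declare -p a                # a=(note.txt todo.txt ab)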

String join and split

Joining and splitting strings are very common operations. In Python, you have string.join() and string.split(). Now, you can do them in Bash also.

-j sep

Join all 'arg' with 'sep' separator, and append the resulting string. E.g.

    a=()                # 'unset a' if 'a' already exists.
    array -j '.'  a  11 22 33 44
    array -j '---'  a  abc 123
    declare -p a                # a=(11.22.33.44 abc---123)

-s sep

Split 'arg' by 'sep' separator, and append each segment to the array. If 'sep' is null, then each char itself becomes an entry. E.g.

    a=()
    array -s '.'  a  11.22.33.44
    array -s '---'  a  abc---123
    declare -p a                # a=(11 22 33 44 abc 123)

-p begin

-q end

Extract strings enclosed by the 'begin' and 'end' delimiters from 'arg'. Append both the matching segments (excluding the delimiters) and the non-matching segments to the array sequentially. If both 'begin' and 'end' are null, or if one of the options is missing, then no splitting is done. E.g.

    a=()
    array -p 'abc' -q 'xyz'  a  abc123xyz789
    declare -p a                # a=(123 789)

You can call the command repeatedly, and the results are appended to the end of the array variable.

Regex split

Practically all modern scripting languages can split a string on a regex pattern, or replace the matching segments using a callback function. Now, so can Bash, and more.

-e regex

Extract 'regex' patterns from 'arg', and append each matching string (think egrep -e). E.g.

    unset a;  a=()
    array -e '[a-z]+'  a  abc123xyz789
    declare -p a                # a=(abc xyz)

-v regex

Remove 'regex' patterns from the 'arg' strings, and append each non-matching string. Matching strings are skipped, like IFS whitespace (think egrep -v). This option is analogous to Awk split() or Python re.split(), in that you're left with the non-matching segments. E.g.

    array -v '[a-z]+'  a  abc123xyz789
    declare -p a                # a=(... 123 789)

-w regex

Similar to the -e and -v options, but both matching and non-matching strings are added sequentially, so that joining the array with a null (empty) string gives back the original data.

    array -w '[a-z]+'  a  abc123xyz789
    declare -p a                # a=(... abc 123 xyz 789)

You can specify regex(7) patterns with the -evw options above. Unlike the -s option, null segments are not appended, since they are rarely useful in regex splitting. If the 'nocaseglob' shell option is set, then regex matching is case-insensitive, just like glob matching.

Callback function and substitution

So far, we have been chopping up the command-line items and collecting the pieces. You can also transform the pieces with a callback command and use the result instead of the original content, just like ${var|command} or the -c command option. However, if you collect the matching and the non-matching segments separately, you lose their relative order. What is needed is to apply the callback command to each piece just before it is appended to the array variable.

-E command

For each matching string, append the result of `command matching [group...]` to the array. The command line consists of the matching string and all parenthesized groups (if any). For the -p and -q options, the command substitution `command inside` is called, where 'inside' is the matching segment without the delimiters.

-V command

For each non-matching string, append the result of `command non-matching` to the array.

The -EV options are independent and take effect only if the -evwpq options are specified. 'command' can be any command you can type on your command line. This is a generalized form of regex substitution.

For example, to increment numbers by 1 and capitalize non-numbers,

    a=()
    addone () { echo $(($1 + 1)); }             # add 1
    upper () { tr 'a-z' 'A-Z' <<< "$1"; }       # to uppercase
    array -w '[0-9]+' -E addone -V upper  a  abc123xyz789
    declare -p a                # a=(ABC 124 XYZ 790)

HTML Template (BAsh Server Pages)

If you can embed Python, Perl, PHP, Java, or VisualBasic within an HTML file, then there is no reason why you can't embed shell script and process the HTML file through the shell. In fact, I've done exactly that. Here is a new builtin to process template strings with embedded shell script.

basp [-p begin -q end] text...

Extract embedded shell scripts enclosed within '<%...%>' delimiters (non-greedy, non-nesting) from the text arguments. Run the scripts at the top level, not as command substitutions, and send the output, along with the surrounding text, to stdout. If there is an error, it returns immediately. If the -p and -q options are given, then 'begin' and 'end' are used as the delimiters instead of '<%' and '%>'.

This is the shell's answer to PHP, JSP, ASP, and the like, so I named it basp (BAsh Server Pages). It is only 70 lines of C, and its main advantage is that you don't have to learn another scripting language and syntax. You can continue to use the shell, which has been around for 30 years. E.g.

    tag=x
    basp '<html> <% printf "<$tag>%s</$tag> " 1 2 3 %> </html>'
           # <html> <x>1</x> <x>2</x> <x>3</x>  </html>

If you have an HTML template in a file, then just read it into a string, like

    basp "`< file.html`"

Because they run at the top level, embedded code-blocks share data and environment with each other and with the main shell session. If you want to isolate the main session, run basp in a subshell.
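
For instance, a variable set in one code-block should be visible in the next (a sketch, assuming the patched shell):

    basp 'a: <% n=1 %> b: <% echo $n %>'        # a:  b: 1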

A more complicated example might be to take a list of items and print a table with 10 consecutive items per row. The template file.html would look like

    <table>
    <%
        set -- {1..40}
        for i in `seq 1 10 $#`; do
            cat << EOF
    <tr> `printf '<td>%s</td> ' ${*:i:10}` </tr>
    EOF
        done
    %>
    </table>

Then,

    basp "`< file.html`"

will produce a 4x10 table which renders to

    1  2  3  4  5  6  7  8  9  10
    11 12 13 14 15 16 17 18 19 20
    21 22 23 24 25 26 27 28 29 30
    31 32 33 34 35 36 37 38 39 40

You can implement the HTML template using the array builtin from above: extract the script between the '<%...%>' delimiters and run it through eval, and print the non-script text to stdout unchanged. It would go something like

    a=()
    array -p '<%' -q '%>' -E eval -V echo  a  "`< file.html`"
    arraycat a

But, although this works for the example above, you are limited by the fact that each command substitution is a separate process and can't share data with the other code-blocks. So, if you put 'set -- {1..40}' in one code-block and the loop in another, then it won't work. Besides,

    basp "`< file.html`"

is less typing.

[Editor's Note: The security ramifications of this are left as an exercise for the reader. Think chroot jail, at a minimum. -- Dave ]

Expat XML parser

I've added a simple interface to the Expat XML parser, so that you can register callback functions and interact with the XML parser from the shell. This new builtin is enabled only if you have Expat installed. If you don't, then you will need to download/compile/install Expat and recompile the Bash shell (starting with ./configure).

xml [-sedicnm command] text...

This is an interface to the Expat-1.95.8 library (from www.libexpat.org). The arguments are fed to the Expat XML parser sequentially; together they must form a single complete XML document, because Expat can handle only one XML document per parser instance. It returns 1 immediately on any error. If all arguments are processed without error, then the builtin returns success (0).

The parser will invoke the callback commands, or handlers, that you specify, with all required parameters on the command-line. The callbacks run at the top level, so if you need to protect your shell environment, run the 'xml' command in a subshell. For the moment, the following options are recognized (a short usage sketch follows the list):

-s command start element (Usage: command tag att=value ... ).

The attribute name and value strings are concatenated with '=', so that 'declare' or 'local' can be used to set shell variables with the same names as attributes, ie.

    declare "$2"        # set the first attribute name
    declare "${@:2}"    # set all attribute names

-e command end element (Usage: command tag )

-d command character data (Usage: command data )

-i command processing instruction (Usage: command target data )

-c command comment (Usage: command text )

-n command namespace start (Usage: command prefix uri )

-m command namespace end (Usage: command prefix )
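
A minimal sketch of registering handlers (assuming the patched shell, with Expat support compiled in):

    start () { echo "+ $*"; }
    end ()   { echo "- $1"; }
    cdata () { echo "  data: $1"; }
    xml -s start -e end -d cdata '<a x="1">hi</a>'
    # + a x=1
    #   data: hi
    # - a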

For convenience, the tag name and attributes of each XML start element are saved in the array variable XML_ELEMENT_STACK as a stack, ie.

    XML_ELEMENT_STACK[0] = number of positional parameters (ie. $#)
    XML_ELEMENT_STACK[1] = tag (ie. $1)
    XML_ELEMENT_STACK[2] = the first attribute 'key=value' (ie. $2)
    ...

and the depth of the current XML element is stored in the shell variable XML_ELEMENT_DEPTH. They are popped and decremented, respectively, at the end of the XML element. Essentially, this is equivalent to manually doing

    pp_push -a XML_ELEMENT_STACK  $# "$@"
    ((XML_ELEMENT_DEPTH++))

at the start of an element, and

    pp_pop -a XML_ELEMENT_STACK  $((XML_ELEMENT_STACK[0] + 1))
    ((XML_ELEMENT_DEPTH--))

at the end of an element.

Example

To illustrate how it works, consider the following XML sample:

    <root>
        <one a="AA" b="BB">
            first line
            <two x="XX"/>
            second line
        </one>
    </root>
  1. When the <root> element is encountered, the parser will set

        XML_ELEMENT_STACK=(1 root)
        XML_ELEMENT_DEPTH=1
    

    and call the command registered with the -s option, with 'root' as the argument,

        command root
    
  2. On encountering the <one> element, it will push '3', 'one', 'a=AA', and 'b=BB' onto XML_ELEMENT_STACK and increment XML_ELEMENT_DEPTH, so that they become

        XML_ELEMENT_STACK=(3 one a=AA b=BB 1 root)
        XML_ELEMENT_DEPTH=2
    

    Also, it will call the -s callback with the tag and attributes, like

        command one a=AA b=BB
    
  3. Similarly, on encountering the <two> element, it will push '2', 'two', and 'x=XX' onto XML_ELEMENT_STACK and increment XML_ELEMENT_DEPTH, which become

        XML_ELEMENT_STACK=(2 two x=XX 3 one a=AA b=BB 1 root)
        XML_ELEMENT_DEPTH=3
    

    and call the -s callback, like

        command two x=XX
    

    Since this tag has an implicit </two> element, it will immediately call the command registered with the -e option, with 'two' as the argument,

        command two
    

    Then, it will pop the current tag and attributes off XML_ELEMENT_STACK and decrement XML_ELEMENT_DEPTH, returning them to the state they were in before entering the <two> element, ie.

        XML_ELEMENT_STACK=(3 one a=AA b=BB 1 root)
        XML_ELEMENT_DEPTH=2
    
  4. On encountering the </one> element, it will call the -e callback,

        command one
    

    and pop the tag and attributes off XML_ELEMENT_STACK and decrement XML_ELEMENT_DEPTH, so that they become

        XML_ELEMENT_STACK=(1 root)
        XML_ELEMENT_DEPTH=1
    
  5. Finally, for the </root> element, it will call the -e callback,

        command root
    

    and pop the current tag off XML_ELEMENT_STACK and decrement XML_ELEMENT_DEPTH, returning to their initial state.

  6. For character data such as 'first line' and 'second line', the command registered with the -d option will be called with the data as the argument. Multiple calls are made if the data spans multiple lines, contains special character encodings, or is broken up by other elements. It is the user's responsibility to collect these data segments; here, strcat (from the earlier articles) would come in handy, and a bare-bones alternative is sketched below.
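
For instance, a character-data collector without strcat (a sketch, assuming the patched shell):

    data=''
    cdata () { data="$data$1"; }
    xml -d cdata "`< file.xml`"
    echo "$data"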

Because XML_ELEMENT_STACK is a stack holding the command-line arguments for all nested elements, you can check it to find out where you are.

In any callback command, the command-line arguments used at the start of the current element are

    arg=( "${XML_ELEMENT_STACK[@]:0:XML_ELEMENT_STACK[0]+1}" )

which consists of $# in ${arg[0]}, the tag name in ${arg[1]}, and the attribute names and values in ${arg[*]:2} (if any). Similarly, the command-line arguments used for the immediate parent element are

    n=${XML_ELEMENT_STACK[0]}
    arg=( "${XML_ELEMENT_STACK[@]:n+1:XML_ELEMENT_STACK[n+1]+1}" )

An easier way would be to rotate the stack, assuming XML_ELEMENT_DEPTH is deep enough to allow rotation, e.g.

    n=${XML_ELEMENT_STACK[0]}
    pp_rotateleft -a XML_ELEMENT_STACK  $((n+1))
    arg=( "${XML_ELEMENT_STACK[@]:0:XML_ELEMENT_STACK[0]+1}" )
    pp_rotateright -a XML_ELEMENT_STACK  $((n+1))

To get a list of all nested tag names, you simply filter out the stack items that contain '=' (attributes) or that consist only of digits (the $# counts). From inside the <two> element in the above example,

    XML_ELEMENT_STACK=(2 two x=XX 3 one a=AA b=BB 1 root)
    echo ${XML_ELEMENT_STACK[*]|~=|^[0-9]+$}            # two one root

will give you just the tags. This is equivalent to manually looping through, like

    for (( i = 1; i <= XML_ELEMENT_DEPTH; i++ )); do    # once per nesting level
        echo ${XML_ELEMENT_STACK[1]}
        pp_rotateleft -a XML_ELEMENT_STACK $((XML_ELEMENT_STACK[0] + 1))
    done

So, the Bash equivalent of the 'outline' example from the Expat distribution would go like

    indent='  '
    start () {
        echo "${indent|*XML_ELEMENT_DEPTH-1}$*"
    }
    xml -s start "`< file.xml`"

producing

    root
      one a=AA b=BB
        two x=XX

GDBM and Associative Arrays

For some reason, Bash doesn't have a key/value data structure (called an associative array, hash, or dictionary in other scripting languages). I've added a wrapper for gdbm(3) with a full set of operations to create and manipulate disk-based associative arrays.

gdbm [-euikvr] [-KVW array] file [key | key value ...]

Typical usage would be as follows:

    gdbm file                   print all key<TAB>value pairs, ie. dict.items()
    gdbm -k file                print all keys, ie. dict.keys()
    gdbm -v file                print all values, ie. dict.values()
    gdbm file key               print var[key], ie. ${var[key]}
    gdbm -r file                reorganize the database
    gdbm -K array file          save all keys into array
    gdbm -V array file          save all values into array
    gdbm -W array file          save all key/value pairs into array sequentially
    gdbm file key value         store key/value, ie. var[key]=value
    gdbm -i file key value      store key/value, only if key is new
    gdbm -v file key name       store the value in a variable, ie. name=${var[key]}
    gdbm -e file                test if file is a GDBM database
    gdbm -e file key            test if key exists
    gdbm -e file key value      test if key exists and var[key] is value
    gdbm -u file key            delete key, ie. unset var[key]
    gdbm -u file key value      delete key, only if var[key] is value

More than one key/value pair can be specified on the command line, and all arguments will be processed even if there is an error. This speeds up data entry, because each 'gdbm' call opens and closes the database file. If the last value is missing (ie. there is an odd number of arguments), then the last key is ignored.

For example,

    gdbm file.db a 111 b 222 c 333

    gdbm file.db a              # 111
    gdbm file.db b              # 222
    gdbm file.db c              # 333

    gdbm -k file.db             # c a b
    gdbm -v file.db             # 333 111 222

    gdbm -v file.db a x b y c z
    declare -p x y z            # x=111 y=222 z=333

    gdbm -e file.db a                   # does 'a' exist?
    gdbm -e file.db a 111 b 222         # is a==111 and b==222 ?
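
Combining these, a persistent counter becomes a read-modify-write pair (a sketch, assuming the patched builtin):

    gdbm -e file.db hits || gdbm file.db hits 0     # initialize once
    gdbm -v file.db hits n                          # read: n=${var[hits]}
    gdbm file.db hits $((n + 1))                    # write: var[hits]=n+1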

There are many benefits to this approach:

  1. the database is a single file which can be copied,

  2. the data survives exit and reboot,

  3. other processes can access the database,

  4. the shell can now handle a database which is bigger than memory.

SQLite, MySQL, and PostgreSQL

Each database comes with its own command-line client program (ie. 'sqlite', 'mysql', and 'psql'). Although it is easy to send SQL statements to the database manager, it can be difficult to bring query results back into the shell: you have to use stdout or a file, read the table, and parse the rows and columns. This is non-trivial for anything but simple data.

I've added a simple interface to SQLite, MySQL, and PostgreSQL:

Lsql [-a array] -d file SQL...

Msql [-a array] [-h host -p port -d dbname -u user -P password] SQL...

Psql [-a array] [-h host -p port -d dbname -u user -P password] SQL...

where Lsql is for SQLite, Msql is for MySQL, and Psql is for PostgreSQL. Of course, if you don't have a database, then you won't be able to use the corresponding builtin.

They all work pretty much the same way: they send SQL statements to the database engine. If there is any query result, they print it to stdout, or (with the -a option) save the data fields into an array variable, row by row. My intention is not to replace the client programs, but to make shell scripts easier to write. For example, here is the tutorial example from the SQLite documentation:

    Lsql -d file.sqlite \
        "CREATE TABLE tbl1(one VARCHAR(10), two SMALLINT)" \
        "INSERT INTO tbl1 VALUES('hello!',10)" \
        "INSERT INTO tbl1 VALUES('goodbye', 20)"        # use 'set +H'

creates a simple table and loads in 2 rows of data. To query it,

    Lsql -d file.sqlite "SELECT * FROM tbl1"    # to stdout

    Lsql -a table -d file.sqlite "SELECT * FROM tbl1"
    declare -p table            # table=(hello! 10 goodbye 20)

The first will print

    hello!  10
    goodbye 20

and the second will put the data into array variable 'table'.
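
Since each row is flattened into the array, the earlier array tools can recover the columns; e.g. (a sketch, assuming the patched builtins):

    Lsql -a table -d file.sqlite "SELECT * FROM tbl1"
    arrayunzip -a table one two
    declare -p one two          # one=(hello! goodbye), two=(10 20)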

Summary

This ends the tutorial on my patches to the Bash-3.0 shell. The Bash shell is an ideal tool for teaching and learning about Linux and programming, because it is so easy to write C extensions and put shell handles on them. It is my sincere hope that readers will stick with the shell a little longer before moving on to other scripting languages. :-)

 


Copyright © 2005, Anonymous. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 110 of Linux Gazette, January 2005
