In your shell (either on your machine or in Binder), make sure you're in the bash_bioinfo_scripts/scripting-basics/ folder:

cd scripting-basics/
Bash scripting is often referred to as a useful "glue language" on the internet. Although a lot of functionality can be covered by both JavaScript and Python, Bash scripting is still very helpful to know. We are going to cover Bash scripting because it is the main shell that is available to us on DNAnexus machines, which are Ubuntu-based.

We will be using Bash scripts as "glue" for multiple applications in cloud computing, including:

- Batch-processing multiple files using xargs or for loops
- Running jobs with dx run on an app such as Swiss Army Knife

As you can see, knowing Bash is extremely helpful when running jobs on the cloud.
Say we have samtools installed on our own machine. Let's start with a basic script and build from there. We'll call it sam_run.sh. With nano, a text editor, we'll start a very basic bash script and build its capabilities out.
scripting-basics/sam_run.sh
#!/bin/bash
samtools stats $1 > $2
Let's take a look at the command that we're running first. We're going to run samtools stats, which will give us statistics on an incoming bam or sam file and save them in a file. We want to be able to run our script like this:
bash sam_run.sh my_file.bam out_stats.txt
When we run it like that, sam_run.sh will run samtools stats like this:
samtools stats my_file.bam > out_stats.txt
So what's going on here is that there is some substitution of the command's arguments. Let's take a look at these.
How did the script know where to substitute each of our arguments? It has to do with the positional argument variables. Arguments (the terms that follow our command) are indexed starting with the number 1, and we can access the value at the first position using the special variable $1. Note that this substitution works even inside quotes.
So, to unpack our script, we are substituting our first argument for the $1, and our second argument for the $2 in our script.
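As a small extension (a hedged sketch, not part of the original script; the argument check is an addition), we can make the script fail early with a usage message if either argument is missing:

#!/bin/bash
# exit with a usage message unless we received both arguments
if [ "$#" -lt 2 ]; then
    echo "usage: bash sam_run.sh <input.bam> <output_stats.txt>" >&2
    exit 1
fi

samtools stats "$1" > "$2"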
How would we rewrite sam_run.sh if we wanted to specify the output file as the first argument and the bam file as the second argument?
scripting-basics/sam_run.sh
#!/bin/bash
samtools stats $2 > $1
For this script, we would switch the positions of $1 and $2.
And we would run sam_run.sh like this:
bash sam_run.sh my_file.bam out_stats.txt
See Section 11.3 for more info.
We will need to use pipes to chain our commands together. Specifically, we need to take a command that generates a list of files on the platform, and then spawn individual jobs to process each file. For this reason, understanding a little bit more about how pipes (|) work in Bash is helpful.
If we want to understand how to chain our scripts together into a pipeline, it is helpful to know about the different streams that are available to the utilities.
Every script has three streams available to it: Standard In (STDIN), Standard Out (STDOUT), and Standard Error (STDERR) (Figure 3.1).
STDIN contains information that is directed to the input of a script (usually text output via STDOUT from another script).
Why do these matter? To work in a Unix pipeline, a script must be able to utilize STDIN and generate STDOUT and STDERR.
Specifically, in pipelines, the STDOUT of one script (here it's run_samtools) is directed into the STDIN of another command (here wc, or word count).
We will mostly use STDOUT in our bash scripts, but STDERR can be really helpful in debugging what’s going wrong.
We’ll use pipes and pipelines not only in starting a bunch of jobs using batch scripting on our home computer, but also when we are processing files within a job.
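For example (a hypothetical invocation, reusing the my_file.bam from earlier), we can send STDOUT down a pipe while capturing STDERR in a log file:

# STDOUT of samtools stats flows into wc -l; 2> redirects STDERR to errors.log
samtools stats my_file.bam 2> errors.log | wc -l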
https://swcarpentry.github.io/shell-novice/04-pipefilter/index.html
https://datascienceatthecommandline.com/2e/chapter-2-getting-started.html?q=stdin#combining-command-line-tools
xargs
A really common pattern is taking a delimited list of files and doing something with each of them. We can do some useful things, such as seeing the first few lines of a set of files, or doing some sort of processing on each file in the set.
Let’s start out with a list of files:
ls data/*.sh
data/batch-on-worker.sh
data/dx-find-data-class.sh
data/dx-find-data-field.sh
data/dx-find-data-name.sh
data/dx-find-path.sh
data/dx-find-xargs.sh
Now that we have a list of files, let's look at the first few lines of each of them and print a separator (---) after each.
scripting-basics/xargs_example.sh
ls data/*.sh | xargs -I% sh -c 'head %; echo "\n---\n"'
#!/bin/bash
cmd_to_run='ls *.vcf.gz | xargs -I% sh -c "bcftools stats % > %.stats.txt"'
dx run swiss-army-knife \
-iin="data/chr1.vcf.gz" \
-iin="data/chr2.vcf.gz" \
-iin="data/chr3.vcf.gz" \
-icmd="${cmd_to_run}"
---
#!/bin/bash
dx find data --class file --brief
---
dx find data --property field_id=23148 --brief
---
dx find data --name "*.bam" --brief
---
Let’s take this apart piece by piece.
xargs takes an -I argument that specifies a placeholder. In our case, we are using % as our placeholder in this statement.
We're passing each filename from ls into the following code:
sh -c 'head %; echo "\n---\n"'
The sh -c opens a sub-shell so that we can execute our command for each of the files in our list. We're using sh -c to run:

'head %; echo "\n---\n"'
So for our first file, data/batch-on-worker.sh, we substitute it for the % in our command:

'head data/batch-on-worker.sh; echo "\n---\n"'
For our second file, data/dx-find-data-class.sh, we would substitute that for the %:

'head data/dx-find-data-class.sh; echo "\n---\n"'
Until we cycle through all of the files in our list.
As you cycle through lists of files, keep in mind this basic xargs pattern (Figure 3.3):
ls <wildcard> | xargs -I% sh -c "<command to run> %"
We will leverage this pattern when we get to batch processing files (Chapter 7).
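For instance (a hypothetical application of the pattern, assuming .bam files in data/ and samtools installed), generating a stats file for every bam file looks like this:

# run samtools stats on each .bam file, writing one .stats.txt per input
ls data/*.bam | xargs -I% sh -c "samtools stats % > %.stats.txt"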
How would we modify the code below to do the following?

- List the .json files in our data/ folder using ls
- Show the last few lines of each file using tail instead of head

ls *.txt | xargs -I% sh -c "head %; echo '---\n'"

One answer:

ls data/*.json | xargs -I% sh -c "tail %; echo '---\n'"
We'll use this to execute batch jobs using dx run. This especially becomes powerful on the platform when we use dx find data to list files in our DNAnexus project.
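As a preview (a hedged sketch only; the flags mirror the Swiss Army Knife example above, and Chapter 7 covers batch jobs properly), that combination has this shape:

# list matching file IDs in the project, then launch one job per file
dx find data --name "*.vcf.gz" --brief | \
  xargs -I% dx run swiss-army-knife \
    -iin="%" \
    -icmd="bcftools stats *.vcf.gz > stats.txt" \
    -y

Each job sees only its own input file, so the *.vcf.gz glob inside the job matches just that file.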
We've already encountered a placeholder variable, %, that we used in running xargs. Let's talk about declaring variables in bash scripts and using them via variable expansion.
In Bash, we can declare a variable by using <variable_name>=<value>. Note that there are no spaces between the variable name (my_variable), the equals sign, and the value ("ggplot2").
my_variable="ggplot2"
echo "My favorite R package is ${my_variable}"
My favorite R package is ggplot2
Take a look at the echo statement above. We expand the variable (that is, we substitute its actual value) by using ${my_variable} in that statement.
In general, when expanding a variable in a quoted string, it is better to use ${my_variable} (the variable name in curly braces). This is especially important when using the variable name as part of a string:
my_var="chr1"
echo "${my_var}_1.vcf.gz"
chr1_1.vcf.gz
If we didn’t use the braces here, like this:
echo "$my_var_1.vcf.gz"
Bash would look for the variable $my_var_1, which doesn't exist. So use the curly braces {} when you expand variables; it's safer overall.
There is an alternate method for variable expansion which we will use when we call a sub-shell - a shell within a shell, much like in our xargs command above. We need to use $() (parentheses) so that the expression is expanded within the sub-shell, but not the top shell. We'll use this when we process multiple files within a single worker.
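Here's a quick sketch (reusing the hypothetical bcftools command from the script output above): because the sh -c command is in single quotes, $(basename %) is expanded by each sub-shell rather than by the top shell:

# xargs fills in %; each sub-shell then evaluates $(basename ...) itself
ls /mnt/project/data/*.vcf.gz | xargs -I% sh -c 'bcftools stats % > "$(basename %)".stats.txt'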
On the DNAnexus platform, there are special helper variables that are available to us, such as $in_prefix, which gives us the prefix of a file input's name. This is essential when running commands within an app, as it allows us to generalize inputs and outputs in our app.
Here’s a practical example:
my_cmd="papermill notebook.ipynb output_notebook.ipynb"
dx run dxjupyterlab -icmd="${my_cmd}" -iin="notebook.ipynb"
We're storing the command we want to run in the ${my_cmd} variable. Here we're mostly using it to break up the dx run statement on the next line.
basename can be very handy on workers

If we are processing a bunch of files on a worker that we have specified using dxFUSE (Section 6.8), we need a way to get the bare filename from a dxfuse path. We will take advantage of this when we process multiple files on the worker.
For example:
basename /mnt/project/worker_scripts/dx-run-script.sh
This will return:
dx-run-script.sh
This can be really handy when we name our outputs.
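For instance (a sketch, assuming a dxFUSE-mounted bam file called sample1.bam and samtools on the worker):

input_file=/mnt/project/data/sample1.bam
# basename strips the dxfuse path, leaving sample1.bam
output_file="$(basename ${input_file}).stats.txt"
samtools stats ${input_file} > ${output_file}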
One point of confusion is when to quote things in Bash. When do you use single quotes (') versus double quotes (")? When do you use \ to escape characters?
Let’s talk about some quoting rules in Bash. I’ve tried to make things as simplified and generalized as possible, rather than stating all of the rules for each quote.
"chr 1.bam"
)"ted's file.bam"
)Let’s go over each of these with an example.
For example, if your filename is chr 1 file.bam, then use double quotes in your argument:
samtools view -c "chr 1 file.bam"
Say you have a file called ted's new file.bam. This can be a problem when you are calling it, especially because of the single quote.
In this case, you can do this:
samtools view -c "ted's new file.bam"
There are a number of special characters (such as Tab and Newline) that can be specified as escape characters. In double quotes, characters such as $ are signals to Bash to expand or evaluate code.
Say that someone had a $ in their file name, such as Thi$file is money.bam. How do we refer to it? We can escape the character with a backslash \:
samtools view -c "Thi\$file is money.bam"
The backslash is a clue to Bash that we don't want variable expansion in this case. Without it, Bash would look for a variable called $file.
This is rarely used, but if you need to keep an escaped character in your filename, you can use single quotes. Say we have a filename called Thi\$file.bam and you need that backslash in the file name (by the way, please don't do this). You can use single quotes to preserve that backslash:
samtools view -c 'Thi\$file.bam'
Again, hopefully you won’t need this.
https://www.grymoire.com/Unix/Quote.html#uh-3
Backticks (`) are an old way to do command evaluation in Bash. For example, running the following on the command line:

echo "there are `ls -l | wc -l` files in this directory"

will produce:

there are 36 files in this directory
Their use is deprecated, so you should use $() for your command evaluations instead:
echo "there are $(ls -l | wc -l) files in this directory"
There are a lot of rules for Bash variable expansion and quoting that I don't cover here; I try to show you a way to do things that works in multiple situations on the platform. That's why I focus on double quotes for filenames and ${} for variable expansion in general: they will work whether your Bash script is run on the command line, in an app, or in WDL.
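A quick demonstration of the difference (you can run this anywhere):

chr="chr1"
echo "${chr}.bam"   # double quotes allow expansion: prints chr1.bam
echo '${chr}.bam'   # single quotes prevent it: prints ${chr}.bam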
Let's end this chapter by wrapping an R script in a Bash script. Say you have an R script you need to run on the command line. In our bash script, we can do the following:
scripting-basics/wrap_r_script.sh
#!/bin/bash
Rscript my_script.R CSVFILE="${1}"
This calls Rscript, the command-line executable for R, to run our R script. Note that we have a named argument called CSVFILE, which is handled differently than in Bash. How do we use this in our R script?
We can pass arguments from our bash script to our R script by using commandArgs() - this returns the arguments passed into the R script (such as CSVFILE=my_csvfile.csv) as a character vector. With a little parsing, we can turn them into a list of named arguments, which we assign to the args object. We then refer to our CSVFILE argument as args$CSVFILE in our script.
scripting-basics/r_script.R
library(tidyverse)

# commandArgs() returns the arguments as a character vector,
# e.g. "CSVFILE=my_csvfile.csv"; parse them into a named list
raw_args <- commandArgs(trailingOnly = TRUE)
args <- as.list(sub("^[^=]*=", "", raw_args))
names(args) <- sub("=.*$", "", raw_args)

# Use args$CSVFILE in read.csv
csv_file <- read.csv(file = args$CSVFILE)

# Do some work with csv_file
csv_filtered <- csv_file |> dplyr::filter()

# Write output
write.csv(csv_filtered, file = paste0(args$CSVFILE, "_filtered.csv"))
Now that we’ve set it up, we can run the R script from the command line as follows:
bash wrap_r_script.sh my_csvfile.csv
In our bash script, wrap_r_script.sh, we're using a positional argument (for simplicity) to specify our CSV file, and then passing the positional argument to a named one (CSVFILE) for my_script.R.
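Concretely, with my_csvfile.csv as the first argument, the Rscript line inside wrap_r_script.sh expands to:

Rscript my_script.R CSVFILE=my_csvfile.csv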
We’ll see when we build apps that our executable scripts need to be written as bash scripts. This means that if we want to run R code, we need to wrap it in a bash script.
Whew, this was a whirlwind tour. Keep this chapter in mind when you’re working with the platform - the bash programming patterns will serve you well. We’ll refer to these patterns a lot when we get to doing more bioinformatics tasks on the platform.