R scripts

Here is a little bit of my recent experience with R scripts. Comments, suggestions and/or opinions are welcome.

Usefulness of R scripts

Besides being an amazing interactive tool for data analysis, R commands can also be executed as scripts. This is useful, for example, when we work on large projects where different parts need to be implemented in different languages that are later glued together to form the final product.

In addition, it is extremely useful to be able to take advantage of pipeline capabilities of the form

```
cat file.txt | preProcessInPython.py | runRmodel.R | formatOutput.sh > output.txt
```

Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface. — Doug McIlroy

Basic R script

A basic template for an R script is given by

```
#! /usr/bin/env Rscript

# R commands here
```

To start with a simple example, create a file `myscript.R` and include the following code in it:

```
#! /usr/bin/env Rscript

x <- 5
print(x)
```

Now go to your terminal and type `chmod +x myscript.R` to give the file execution permission. Then, execute your first script by typing `./myscript.R` on the terminal. You should see

```
[1] 5
```

displayed on your terminal, since the result is by default directed to `stdout`. We could have written the output of `x` to a file instead, of course. In order to do this, just replace the `print(x)` statement with a writing command, for example

```
output <- file("output_file.txt", "w")
write(x, file = output)
close(output)
```

which will write `5` to `output_file.txt`.

Processing command-line arguments

There are different ways to process command-line arguments in R scripts. My favorite so far is to use the getopt package from Allen Day and Trevor L. Davis. Type

```
install.packages("getopt")  # from CRAN, or the development version with:
require(devtools)
devtools::install_github("trevorld/getopt")
```

in an R environment to install it on your machine. To use `getopt` in your R script you need to specify a four-column matrix with information about the command-line arguments that you want to allow users to specify. Each row in this matrix represents one command-line option. For example, the following script allows the user to specify the output variable using the short flag `-x` or the long flag `--xValue`.

```
#! /usr/bin/env Rscript
require("getopt", quietly = TRUE)

spec = matrix(c(
  "xValue", "x", 1, "double"
), byrow = TRUE, ncol = 4)

opt = getopt(spec)

if (is.null(opt$xValue)) {
  x <- 5
} else {
  x <- opt$xValue
}

print(x)
```

As you can see above, the `spec` matrix has four columns. The first defines the long flag name `xValue`, the second the short flag name `x`, the third the type of argument that should follow the flag (0 = no argument, 1 = required argument, 2 = optional argument), and the fourth the data type to which the flag argument shall be cast (`logical`, `integer`, `double`, `complex`, `character`). There is also an optional fifth column (not used here) that allows you to add a brief description of the purpose of the option. Now our `myscript.R` accepts command-line arguments:

```
./myscript.R
[1] 5
./myscript.R -x 7
[1] 7
./myscript.R --xValue 9
[1] 9
```
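The optional fifth column can hold a short description of each option, which `getopt` can assemble into a usage message. Below is a sketch of this (the flag descriptions and the `--help` flag are my own additions, not part of the original script; the `getopt` package is assumed to be installed):

```r
#! /usr/bin/env Rscript
# Sketch: a 5-column spec with option descriptions and a --help flag.
if (!require("getopt", quietly = TRUE)) quit(save = "no")

spec = matrix(c(
  "xValue", "x", 1, "double",  "value to print (defaults to 5)",
  "help",   "h", 0, "logical", "display this help message"
), byrow = TRUE, ncol = 5)

opt = getopt(spec)

if (!is.null(opt$help)) {
  cat(getopt(spec, usage = TRUE))  # print the auto-generated usage text
  quit(save = "no")
}

x <- if (is.null(opt$xValue)) 5 else opt$xValue
print(x)
```

Running the script with `--help` then prints one usage line per flag, built from the fifth column.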

Verbose mode and stderr

We can also create a verbose flag and direct all verbose comments to `stderr` instead of `stdout`, so that we don't mix the output of the script with informative messages from the verbose option. The following is an illustration of a verbose flag implementation.

```
#! /usr/bin/env Rscript
require("getopt", quietly = TRUE)

spec = matrix(c(
  "xValue" , "x", 1, "double",
  "verbose", "v", 0, "logical"
), byrow = TRUE, ncol = 4)

opt = getopt(spec)

if (is.null(opt$xValue)) {
  x <- 5
} else {
  x <- opt$xValue
}

if (is.null(opt$verbose)) {
  verbose <- FALSE
} else {
  verbose <- opt$verbose
}

if (verbose) {
  write("Verbose going to stderr instead of stdout", stderr())
}

write(x, file = stdout())
```

We have now two possible flags to specify in our `myscript.R`:

```
./myscript.R
5
./myscript.R -x 7
7
./myscript.R -x 7 -v
Verbose going to stderr instead of stdout
7
```

The main difference between directing verbose messages to `stderr` and `stdout` appears when we redirect the output to a file. In the code below the verbose message appears on the terminal while the value of `x` goes to `output_file.txt`, as desired.

```
./myscript.R -x 7 -v > output_file.txt
Verbose going to stderr instead of stdout

cat output_file.txt
7
```

stdin in a non-interactive mode

To take full advantage of the pipeline capabilities mentioned at the beginning of this post, it is useful to accept input from `stdin`. For example, a template for a script that reads one line at a time from `stdin` could be

```
input_con <- file("stdin")
open(input_con)
while (length(oneLine <- readLines(con = input_con,
                                   n = 1,
                                   warn = FALSE)) > 0) {
  # do something one line at a time ...
}
close(input_con)
```
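As a concrete (hypothetical) filling-in of this template, the following script echoes each line of `stdin` in upper case:

```r
#! /usr/bin/env Rscript
# Hypothetical example: upper-case every line read from stdin.
input_con <- file("stdin")
open(input_con)
while (length(oneLine <- readLines(con = input_con, n = 1, warn = FALSE)) > 0) {
  write(toupper(oneLine), file = stdout())
}
close(input_con)
```

so that `echo "hello" | ./toupper.R` prints `HELLO`.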

Note that when we are running our R scripts from the terminal we are in a non-interactive mode, which means that

```
input_con <- stdin()
```

would not work as expected on the template above. As described on the help page for `stdin()`:

stdin() refers to the ‘console’ and not to the C-level ‘stdin’ of the process. The distinction matters in GUI consoles (which may not have an active ‘stdin’, and if they do it may not be connected to console input), and also in embedded applications. If you want access to the C-level file stream ‘stdin’, use file(“stdin”).

And that is the reason I used

```
input_con <- file("stdin")
open(input_con)
```

instead. Naturally, we could read the data from `stdin` by default while making a flag available in case the user wants to provide a file path containing the data. Below is a template for this:

```
spec = matrix(c(
  "data", "d", 1, "character"
), byrow = TRUE, ncol = 4)

opt = getopt(spec)

if (is.null(opt$data)) {
  data_file <- "stdin"
} else {
  data_file <- opt$data
}

# The read.table arguments below (header, sep) are illustrative;
# adjust them to the format of your data.
if (data_file == "stdin") {
  input_con <- file("stdin")
  open(input_con)
  data <- read.table(input_con, header = TRUE,
                     sep = "\t", stringsAsFactors = FALSE)
  close(input_con)
} else {
  data <- read.table(data_file, header = TRUE,
                     sep = "\t", stringsAsFactors = FALSE)
}
```

References:

[1] Relevant help pages, such as `?Rscript`.
[2] Reference manual of the R package getopt.

Near-zero variance predictors. Should we remove them?

Datasets sometimes come with predictors that take a unique value across samples. Such uninformative predictors are more common than you might think. This kind of predictor is not only non-informative, it can break some models you may want to fit to your data (see the example below). Even more common is the presence of predictors that are almost constant across samples. One quick and dirty solution is to remove all predictors that satisfy some threshold criterion related to their variance.

Here I discuss this quick solution but point out that this might not be the best approach to use depending on your problem. That is, throwing data away should be avoided, if possible.

It would be nice to know how you deal with this problem.

Zero and near-zero predictors

Constant and almost constant predictors across samples (called zero and near-zero variance predictors in [1], respectively) happen quite often. One reason is that we usually break a categorical variable with many categories into several dummy variables. Hence, when one of the categories has zero observations, it becomes a dummy variable full of zeroes.
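We can see this effect directly with `model.matrix`: a factor level that is declared but never observed produces an all-zero dummy column (the tiny factor below is made up for illustration):

```r
# Level "c" is declared but never observed, so its dummy column is all zeros.
f <- factor(c("a", "b", "a"), levels = c("a", "b", "c"))
m <- model.matrix(~ f - 1)   # one indicator column per level
m[, "fc"]                    # all zeros: a zero variance predictor
```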

To illustrate this, take a look at what happens when we want to apply Linear Discriminant Analysis (LDA) to the German Credit Data.

```
require(caret)
data(GermanCredit)

require(MASS)
r = lda(formula = Class ~ ., data = GermanCredit)

Error in lda.default(x, grouping, ...) :
variables 26 44 appear to be constant within groups
```

If we take a closer look at those predictors indicated as problematic by `lda` we see what the problem is. Note that I have added 1 to the index since `lda` does not count the target variable when informing you where the problem is.

```
colnames(GermanCredit)[26 + 1]
[1] "Purpose.Vacation"

table(GermanCredit[, 26 + 1])

0
1000

colnames(GermanCredit)[44 + 1]
[1] "Personal.Female.Single"

table(GermanCredit[, 44 + 1])

0
1000
```

Quick and dirty solution: throw data away

As we can see above, no loan was taken to pay for a vacation and there is no single female in our dataset. A natural first choice is to remove predictors like those. And this is exactly what the function `nearZeroVar` from the `caret` package does. It identifies not only predictors that have one unique value across samples (zero variance predictors), but also predictors that have both 1) few unique values relative to the number of samples and 2) a large ratio of the frequency of the most common value to the frequency of the second most common value (near-zero variance predictors).

```
x = nearZeroVar(GermanCredit, saveMetrics = TRUE)

str(x, vec.len=2)

'data.frame':  62 obs. of  4 variables:
 $ freqRatio    : num  1.03 1 ...
 $ percentUnique: num  3.3 92.1 0.4 0.4 5.3 ...
 $ zeroVar      : logi  FALSE FALSE FALSE ...
 $ nzv          : logi  FALSE FALSE FALSE ...
```

We can see above that if we call the `nearZeroVar` function with the argument `saveMetrics = TRUE` we have access to the frequency ratio and the percentage of unique values for each predictor, as well as flags that indicate whether the variables are considered zero variance or near-zero variance predictors. By default, a predictor is classified as near-zero variance if the percentage of unique values in the samples is less than 10% and the frequency ratio mentioned above is greater than 19 (95/5). These default values can be changed by setting the arguments `uniqueCut` and `freqCut`.
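For a single predictor the two metrics are easy to compute by hand. The vector below is made up to sit exactly at the default frequency-ratio cutoff:

```r
# 95 zeroes and 5 ones: the most common value is 19 times more frequent
# than the second most common one, and only 2% of the values are unique.
pred <- c(rep(0, 95), rep(1, 5))

tab <- sort(table(pred), decreasing = TRUE)
freqRatio <- as.numeric(tab[1] / tab[2])                    # 19
percentUnique <- 100 * length(unique(pred)) / length(pred)  # 2
```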

We can explore which ones are the zero variance predictors

```
x[x[,"zeroVar"] > 0, ]

                       freqRatio percentUnique zeroVar  nzv
Purpose.Vacation               0           0.1    TRUE TRUE
Personal.Female.Single         0           0.1    TRUE TRUE
```

and which ones are the near-zero variance predictors

```
x[x[,"zeroVar"] + x[,"nzv"] > 0, ]

                                   freqRatio percentUnique zeroVar  nzv
ForeignWorker                       26.02703           0.2   FALSE TRUE
CreditHistory.NoCredit.AllPaid      24.00000           0.2   FALSE TRUE
CreditHistory.ThisBank.AllPaid      19.40816           0.2   FALSE TRUE
Purpose.DomesticAppliance           82.33333           0.2   FALSE TRUE
Purpose.Repairs                     44.45455           0.2   FALSE TRUE
Purpose.Vacation                     0.00000           0.1    TRUE TRUE
Purpose.Retraining                 110.11111           0.2   FALSE TRUE
Purpose.Other                       82.33333           0.2   FALSE TRUE
SavingsAccountBonds.gt.1000         19.83333           0.2   FALSE TRUE
Personal.Female.Single               0.00000           0.1    TRUE TRUE
OtherDebtorsGuarantors.CoApplicant  23.39024           0.2   FALSE TRUE
OtherInstallmentPlans.Stores        20.27660           0.2   FALSE TRUE
Job.UnemployedUnskilled             44.45455           0.2   FALSE TRUE
```

Now, should we always remove our near-zero variance predictors? Well, I am not that comfortable with that.

Try not to throw your data away

Think for a moment: the solution above is easy and "solves the problem", but we are assuming that all those predictors are non-informative, which is not necessarily true, especially for the near-zero variance ones. Those near-zero variance predictors can in fact turn out to be very informative.

For example, assume that a binary predictor in a classification problem has lots of zeroes and few ones (a near-zero variance predictor). Every time this predictor equals one we know exactly the class of the target variable, while a value of zero for this predictor can be associated with either one of the classes. This is a valuable predictor that would be thrown away by the method above.
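A made-up numerical illustration: below, `x` equals 1 for only 10 of 1000 samples, so it comfortably exceeds the default near-zero variance cutoffs, yet `x == 1` identifies class "B" with certainty.

```r
# Hypothetical data: a rare but perfectly informative binary predictor.
y <- factor(c(rep("A", 990), rep("B", 10)))
x <- as.numeric(y == "B")                       # 1 only for class "B"

tab <- sort(table(x), decreasing = TRUE)
as.numeric(tab[1] / tab[2])                     # freqRatio = 99 > 19
100 * length(unique(x)) / length(x)             # percentUnique = 0.2 < 10
table(x, y)                                     # x = 1 pins down class "B"
```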

This is somewhat related to the separation problem that can happen in logistic regression, where a predictor (or combination of predictors) can perfectly predict (separate) the data. The common approach not long ago was to exclude those predictors from the analysis, but better solutions were discussed by [2], which proposed a penalized likelihood solution, and [3], which suggested the use of weakly informative priors for the regression coefficients of the logistic model.

Personally, I prefer to use a well designed Bayesian model whenever possible, more like the solution provided by [3] for the separation problem mentioned above. One solution for the near-zero variance problem is to collect more data, and although this is not always possible, there are many applications where you know you will receive more data from time to time. It is then important to keep in mind that such a well designed model would still give you sensible solutions while you don't yet have enough data, but would naturally adapt as more data arrives for your application.

References:

[1] Kuhn, M., and Johnson, K. (2013). Applied Predictive Modeling. Springer.
[2] Zorn, C. (2005). A solution to separation in binary response models. Political Analysis, 13(2), 157-170.
[3] Gelman, A., Jakulin, A., Pittau, M.G. and Su, Y.S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360-1383.

Character strings in R

This post deals with the basics of character strings in R. My main reference has been Gaston Sanchez's ebook [1], which is excellent and you should read it if you are interested in manipulating text in R. I got the encoding section from [2], which is also a nice reference to have nearby. Text analysis will be one topic of interest on this blog, so expect more posts about it in the near future.

Creating character strings

The class of an object that holds character strings in R is “character”. A string in R can be created using single quotes or double quotes.

```
chr = 'this is a string'
chr = "this is a string"

chr = "this 'is' valid"
chr = 'this "is" valid'
```

We can create an empty string with `empty_str = ""` or an empty character vector with `empty_chr = character(0)`. Both have class “character” but the empty string has length equal to 1 while the empty character vector has length equal to zero.

```
empty_str = ""
empty_chr = character(0)

class(empty_str)
[1] "character"
class(empty_chr)
[1] "character"

length(empty_str)
[1] 1
length(empty_chr)
[1] 0
```

The function `character()` will create a character vector with as many empty strings as we want. We can add new components to the character vector just by assigning a value to an index outside the current valid range. The indices do not need to be consecutive; if there is a gap, R will auto-complete the missing elements with `NA`.

```
chr_vector = character(2) # create char vector
chr_vector
[1] "" ""

chr_vector[3] = "three" # add new element
chr_vector
[1] ""      ""      "three"

chr_vector[5] = "five" # do not need to be consecutive
chr_vector
[1] ""      ""      "three" NA      "five"
```

Auxiliary functions

The functions `as.character()` and `is.character()` can be used to convert non-character objects into character strings and to test if an object is of type "character", respectively.
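For example:

```r
# Convert a non-character object and test for the "character" type
as.character(3.14)     # "3.14"
is.character("text")   # TRUE
is.character(3.14)     # FALSE
```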

Strings and data objects

R has five main types of objects to store data: `vector`, `factor`, multi-dimensional `array`, `data.frame` and `list`. It is interesting to know how these objects behave when exposed to different types of data (e.g. `character`, `numeric`, `logical`).

• `vector`: Vectors must have their values all of the same mode. If we combine mixed types of data in vectors, strings will dominate.
• `array`: A matrix, which is a 2-dimensional array, has the same behavior found in vectors.
• `data.frame`: By default, a column that contains character strings is converted to a factor. If we want to turn this default behavior off we can use the argument `stringsAsFactors = FALSE` when constructing the `data.frame` object.
• `list`: Each element in the list will maintain its corresponding mode.

```
# character dominates vector
c(1, 2, "text")
[1] "1"    "2"    "text"

# character dominates arrays
rbind(1:3, letters[1:3])
[,1] [,2] [,3]
[1,] "1"  "2"  "3"
[2,] "a"  "b"  "c"

# data.frame with stringsAsFactors = TRUE (default)
df1 = data.frame(numbers = 1:3, letters = letters[1:3])
df1
numbers letters
1       1       a
2       2       b
3       3       c

str(df1, vec.len=1)
'data.frame':  3 obs. of  2 variables:
 $ numbers: int  1 2 ...
 $ letters: Factor w/ 3 levels "a","b","c": 1 2 ...

# data.frame with stringsAsFactors = FALSE
df2 = data.frame(numbers = 1:3, letters = letters[1:3],
stringsAsFactors = FALSE)
df2
numbers letters
1       1       a
2       2       b
3       3       c

str(df2, vec.len=1)
'data.frame':  3 obs. of  2 variables:
 $ numbers: int  1 2 ...
 $ letters: chr  "a" ...

# Each element in a list has its own type
list(1:3, letters[1:3])
[[1]]
[1] 1 2 3

[[2]]
[1] "a" "b" "c"
```

Character encoding

R provides functions to deal with various sets of encoding schemes. The `Encoding()` function returns the encoding of a string, and `iconv()` converts a string from one encoding to another.

```
chr = "lá lá"
Encoding(chr)
[1] "UTF-8"

chr = iconv(chr, from = "UTF-8", to = "latin1")
Encoding(chr)
[1] "latin1"
```

References: