Keywords
R, package, machine learning, classification, cross-validation, supervised, unsupervised, genomics, prediction
Supervised machine learning has an increasingly important role in biological studies. However, the sheer complexity of classification pipelines poses a significant barrier to expert biologists unfamiliar with the intricacies of machine learning. Moreover, many biologists lack the time or technical skills necessary to establish their own classification pipelines. Here we discuss the exprso package, a framework for the rapid implementation of high-throughput classification, tailored specifically for use with high-dimensional data. As such, this package aims to empower investigators to execute state-of-the-art binary and multi-class classification, including deep learning, while requiring minimal programming experience.
Although R offers a tremendous number of high-quality classification packages, only a handful of fully integrated machine learning suites exist for R. Of these, we recognize here the caret package, which offers an expansive toolkit for both classification and regression analyses1. We also acknowledge the RWeka package, which provides an API to the popular Weka machine learning suite, originally written in Java2. While these packages have a vast repertoire of functionality, we believe the exprso package has three key advantages. First, this package employs an object-oriented design that makes the software intuitive to lay programmers. In place of a few elaborate functions that offer power at the expense of convenience, this package uses many simpler functions, whereby each constituent step has its own method that users can chain together to create their own custom analytical pipeline.
Second, this package exposes carefully crafted modules which simplify several high-throughput classification pipelines. Single functions, coupled with special argument handlers, manage sophisticated pipelines such as high-throughput parameter grid-searching, Monte Carlo cross-validation3, and nested cross-validation4. Moreover, users can embed these high-throughput modules (e.g., parameter grid-searching) within other modules (e.g., Monte Carlo cross-validation), allowing for a vast number of composite pipeline designs. In addition, this package provides an automated way to build ensemble classifiers from the results of these high-throughput modules.
Third, this package prioritizes multi-class classification by generalizing binary classification methods to a multi-class context. Specifically, this package automatically executes 1-vs-all classification and prediction whenever working with a dataset that contains multiple class labels. In addition, this package provides a specialized high-throughput module for 1-vs-all classification with individual 1-vs-all feature selection, an alternative to conventional multi-class classification that has been reported to improve results, at least in the setting of 1-vs-1 multi-class support vector machines5.
While we acknowledge that the premier machine learning suites, like caret, may surpass our package in the breadth of their functionality, we do not intend for exprso to replace these tools. Rather, we developed exprso as an adjunct, or alternative, tailored specifically to those with limited programming experience, especially biologists working with high-dimensional data. That said, we hope that even some expert programmers may find value in this software tool.
This package uses an object-oriented framework for classification. In this paradigm, every unique task, such as data splitting (i.e., creating the training and validation sets), feature selection, and classifier construction, has its own associated function, called a method. These methods typically work as wrappers for other R packages, structured so that the objects returned by one method will feed seamlessly into the next method.
In other words, each method represents one of a number of analytical modules that provides the user with stackable and interchangeable data processing tools. Examples of these methods include wrappers for popular feature selection methods (e.g., analysis of variance (ANOVA), recursive feature elimination6,7, empiric Bayes statistic8, minimum redundancy maximum relevancy (mRMR)9, and more) as well as numerous classification methods (e.g., support vector machines (SVM)10, neural networks11, deep neural networks12, random forests13, and more).
We have adopted a nomenclature to help organize the methods available in this package. In this scheme, most functions have a few letters at the beginning of their name to designate their general utility. Below, we include a brief description of these function prefixes along with a flow diagram of the available methods; a short sketch of how these modules chain together follows the list.
array: Modules that import data stored as a data.frame, ExpressionSet object, or local text file.
mod: Modules that modify the imported data prior to classification.
split: Modules that split these data into training and validation (or test) sets.
fs: Modules that perform feature selection.
build: Modules that build classifiers and classifier ensembles.
predict: Modules that deploy classifiers and classifier ensembles.
calc: Modules that calculate classifier performance, including area under the receiver operating characteristic (ROC) curve (AUC).
pl: Modules that manage elaborate classification pipelines, including high-throughput parameter grid-searches, Monte Carlo cross-validation, and nested cross-validation.
pipe: Modules that filter the classification pipeline results.
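To illustrate how these prefixed modules chain together, consider the brief sketch below. It uses only methods demonstrated later in this article; the object array stands for an already-imported dataset, and the argument values are illustrative rather than recommended.

splits <- splitSample(array, percent.include = 67)        # split: partition into training and test sets
data.fs <- fsStats(trainingSet(splits), how = "t.test")   # fs: rank features by Student's t-test
model <- buildSVM(data.fs, kernel = "linear")             # build: train a classifier
pred <- predict(model, testSet(splits))                   # predict: deploy on the test set
calcStats(pred)                                           # calc: summarize performance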
We refer the reader to the package vignette, “An Introduction to the exprso Package,” hosted with the package on the Comprehensive R Archive Network (CRAN), for a detailed description of the object-oriented framework and methods used in this package16.
Specific computer hardware requirements will depend on the dimensions of the dataset under study, the methods deployed on that dataset, and the extent of any high-throughput analyses used. For the most part, however, a standard laptop computer with the latest version of R installed will handle most applications of the exprso package.
To showcase this package, we make use of the publicly available hallmark Golub 1999 dataset to differentiate acute lymphocytic leukemia (ALL) from acute myelogenous leukemia (AML) based on gene expression as measured by microarray technology17. We begin by importing this dataset as an ExpressionSet object from the package golubEsets (version 1.16.0)18. Then, using the arrayExprs function, we load the ExpressionSet object into exprso. Next, using the modFilter, modTransform, and modNormalize methods, we threshold filter, log2 transform, and standardize the data, respectively, reproducing the pre-processing steps taken by the original investigators19.
To keep the code clear and concise, we make use of the %>% function notation from the magrittr package20. In short, this function passes the result from the previous function call to the first argument of the next function, an action colloquially known as piping.
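For example, the two expressions below compute the same value; the piped form reads left to right in the order the operations occur.

sqrt(sum(c(1, 4, 9)))        # nested function calls
c(1, 4, 9) %>% sum %>% sqrt  # the same computation, piped

The code below applies this notation to the pre-processing steps described above.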
library(exprso)
library(golubEsets)
library(magrittr)

data(Golub_Merge)

array <- arrayExprs(Golub_Merge, colBy = "ALL.AML",
                    include = list("ALL", "AML")) %>%
  modFilter(20, 16000, 500, 5) %>%
  modTransform %>%
  modNormalize
Then, using the splitSample method, one of the split methods shown in the above diagram, we partition the data into a training and a test set through random sampling without replacement. Next, we perform a series of feature selection methods on the extracted training set. Using the fs modules fsStats and fsPrcomp, we rank features by Student’s t-test and then subject the top 50 features to dimension reduction by principal components analysis (PCA).
splitSets <- splitSample(array, percent.include = 67)

array.fs <- trainingSet(splitSets) %>%
  fsStats(how = "t.test") %>%
  fsPrcomp(top = 50)
With feature selection complete, we can construct a classifier. For this example, we use the buildSVM method to train a linear kernel support vector machine (SVM) (with default parameters) using the top 5 principal components. Then, we deploy the trained machine on the test set from above. Note that through the object-oriented framework, each feature selection event, including the rules for dimension reduction by PCA, gets passed along automatically until classifier prediction. This ensures that the test set always undergoes the same feature selection history as the training set. The calcStats function allows us to calculate classifier performance as sensitivity, specificity, accuracy, or area under the curve (AUC)21,22.
pred <- array.fs %>%
  buildSVM(top = 5, kernel = "linear") %>%
  predict(testSet(splitSets))

calcStats(pred)
When constructing a classifier using a build module, we can only specify one set of parameters at a time. However, investigators often want to test models across a vast range of parameters. For this reason, we provide methods like plGrid to automate high-throughput parameter grid-searches. These methods not only wrap classifier construction, but classifier deployment as well. In addition, they accept a fold argument to toggle leave-one-out or v-fold cross-validation.
Below, we show a simple example of parameter grid-searching, whereby the top 3, 5, and 10 principal components, as established above, get used as a substrate for the construction of linear and radial kernel SVMs with costs of 1, 101, and 1001. In addition, we calculate a biased 10-fold cross-validation accuracy to help guide our choice of the final model parameters. Take note that we call this accuracy biased because we are performing cross-validation on a dataset that has already undergone feature selection. Although this approach gives a poor assessment of absolute classifier performance23, it may still have value in helping to guide parameter selection in a statistically valid manner. As an alternative to this biased cross-validation accuracy, users can instead call the plNested method in which feature selection is performed anew with each data split that occurs during the leave-one-out or v-fold cross-validation.
pl <- plGrid(array.fs, testSet(splitSets), top = c(3, 5, 10),
             how = "buildSVM", kernel = c("linear", "radial"),
             cost = c(1, 101, 1001), fold = 10)
Finally, we show an example for the plMonteCarlo method, an implementation of Monte Carlo cross-validation. Compared to the plGrid method which iteratively builds and deploys classifiers on a validation (or test) set, plMonteCarlo wraps multiple iterations of data splitting, feature selection, and parameter grid-searching. The final result therefore contains the classifier performances as measured on a number of bootstraps carved out from the initial dataset. Argument handler functions help organize the arguments supplied to the split, feature selection, and high-throughput methods managed during the plMonteCarlo method call.
Take note that when using the Monte Carlo cross-validation method (or any of the other pl modules), the user may iterate over any classification method provided by exprso, not only buildSVM. This includes the buildDNN method for deep neural networks as implemented via h2o12. Also note that the user can embed other cross-validation methods, such as another Monte Carlo or nested method, within the cross-validation method call, allowing for endless combinatory possibility.
In the first section of the code below, we define the argument handler functions for the plMonteCarlo call. As suggested by their names, the ctrlSplitSet, ctrlFeatureSelect, and ctrlGridSearch handlers manage arguments to data splitting, feature selection, and high-throughput grid-searching, respectively. In this example, we set up arguments to split the unaltered training set through random sampling with replacement, perform the two-step feature selection process from above, and run a high-throughput parameter grid-search with biased cross-validation. The training set is processed in this way 10 times, as directed by the argument B.
ss <- ctrlSplitSet(func = "splitSample", percent.include = 67, replace = TRUE)
fs <- list(ctrlFeatureSelect(func = "fsStats", how = "t.test"),
           ctrlFeatureSelect(func = "fsPrcomp", top = 50))
gs <- ctrlGridSearch(func = "plGrid", top = c(3, 5, 10), how = "buildSVM",
                     kernel = c("linear", "radial"), cost = c(1, 101, 1001),
                     fold = 10)

boot <- plMonteCarlo(trainingSet(splitSets), B = 10,
                     ctrlSS = ss, ctrlFS = fs, ctrlGS = gs)
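As an aside, the plNested method mentioned earlier can reuse these handler objects. The call below is a sketch only: we assume, by analogy with plMonteCarlo, that plNested accepts the feature selection and grid-search handlers through ctrlFS and ctrlGS arguments; consult the package documentation for the definitive interface.

# Sketch only: the ctrlFS/ctrlGS argument names are assumed by analogy with plMonteCarlo
nest <- plNested(trainingSet(splitSets), fold = 10, ctrlFS = fs, ctrlGS = gs)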
As an adjunct to this bootstrapping pipeline, the user can apply these results to build a classifier ensemble using the best classifier from each bootstrap, then deploy that classifier on the withheld test set. Analogous to how random forests will deploy an ensemble of decision trees24, this method, which we dub “random plains”, will deploy an ensemble of SVMs.
ens <- buildEnsemble(boot, colBy = "valid.acc", top = 1)
pred <- predict(ens, testSet(splitSets))
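As with any prediction object in this framework, the performance of the deployed ensemble on the withheld test set can then be summarized as before.

calcStats(pred)  # performance of the 'random plains' ensemble on the test set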
Beyond those mentioned here, this package also includes methods for integrating unsupervised machine learning (i.e., clustering) into classification pipelines. In addition, exprso contains high-throughput methods specialized for multi-class classification. We refer the reader to the package vignettes, “An Introduction to the exprso Package” and “Advanced Topics for the exprso Package”, both hosted with the package on the Comprehensive R Archive Network (CRAN), for a detailed description of all methods included in this package16.
Here we introduce the R package exprso, a machine learning suite tailored specifically to working with high-dimensional data. Unlike other machine learning suites, we have prioritized simplicity of use over expansiveness. By developing this package in an object-oriented framework, we provide a fully interchangeable and modular programming interface that allows for the rapid implementation of binary and multi-class classification pipelines. We have included in this framework functions for executing some of the most popular feature selection methods and classification algorithms. In addition, exprso also contains a number of modules that facilitate classification with high-throughput parameter grid-searching in conjunction with sophisticated cross-validation schemes. Owing to its ease-of-use and extensive documentation, we hope exprso will serve as an indispensable resource, especially to scientific investigators with limited prior programming experience.
Software available from: http://cran.r-project.org/web/packages/exprso/
Latest source code: http://github.com/tpq/exprso
Archived source code as at time of publication: http://doi.org/10.5281/zenodo.16206325
Software license: GNU General Public License, version 2.
TQ designed and implemented the tool, applied the tool to the use case, and drafted the article. DT and SG helped design the tool and drafted the article. DT contributed code and performed extensive beta testing.
Competing Interests: No competing interests were disclosed.