Software Tool Article

exprso: an R-package for the rapid implementation of machine learning algorithms

[version 1; peer review: 1 approved, 1 approved with reservations]
PUBLISHED 27 Oct 2016

Abstract

Machine learning plays a major role in many scientific investigations. However, non-expert programmers may struggle to implement the elaborate pipelines necessary to build highly accurate and generalizable models. We introduce here a new R package, exprso, as an intuitive machine learning suite designed specifically for non-expert programmers. Built primarily for the classification of high-dimensional data, exprso uses an object-oriented framework to encapsulate a number of common analytical methods into a series of interchangeable modules. This includes modules for feature selection, classification, high-throughput parameter grid-searching, elaborate cross-validation schemes (e.g., Monte Carlo and nested cross-validation), ensemble classification, and prediction. In addition, exprso provides native support for multi-class classification through the 1-vs-all generalization of binary classifiers. In contrast to other machine learning suites, we have prioritized simplicity of use over expansiveness when designing exprso.

Keywords

R, package, machine learning, classification, cross-validation, supervised, unsupervised, genomics, prediction

Introduction

Supervised machine learning has an increasingly important role in biological studies. However, the sheer complexity of classification pipelines poses a significant barrier to expert biologists unfamiliar with the intricacies of machine learning. Moreover, many biologists lack the time or technical skills necessary to establish their own classification pipelines. Here we discuss the exprso package, a framework for the rapid implementation of high-throughput classification, tailored specifically for use with high-dimensional data. As such, this package aims to empower investigators to execute state-of-the-art binary and multi-class classification, including deep learning, with minimal programming experience necessary.

Although R offers a tremendous number of high-quality classification packages, only a handful of fully integrated machine learning suites exist for R. Of these, we recognize here the caret package, which offers an expansive toolkit for both classification and regression analyses1. We also acknowledge the RWeka package, which provides an API to the popular Weka machine learning suite, originally written in Java2. While these packages have a vast repertoire of functionality, we believe the exprso package has three key advantages. First, this package employs an object-oriented design that makes the software intuitive to lay programmers. In place of a few elaborate functions that offer power at the expense of convenience, this package uses many simpler functions, each handling one constituent step, which users can combine in tandem to create their own custom analytical pipelines.

Second, this package exposes carefully crafted modules which simplify several high-throughput classification pipelines. Single functions, coupled with special argument handlers, manage sophisticated pipelines such as high-throughput parameter grid-searching, Monte Carlo cross-validation3, and nested cross-validation4. Moreover, users can embed these high-throughput modules (e.g., parameter grid-searching) within other modules (e.g., Monte Carlo cross-validation), allowing for a wide range of composite designs. In addition, this package provides an automated way to build ensemble classifiers from the results of these high-throughput modules.

Third, this package prioritizes multi-class classification by generalizing binary classification methods to a multi-class context. Specifically, this package automatically executes 1-vs-all classification and prediction whenever working with a dataset that contains multiple class labels. In addition, this package provides a specialized high-throughput module for 1-vs-all classification with individual 1-vs-all feature selection, an alternative to conventional multi-class classification which has been reported to improve results, at least in the setting of 1-vs-1 multi-class support vector machines5.
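The 1-vs-all strategy itself is straightforward: one binary classifier is trained per class against the pooled remainder, and each sample is assigned to the class whose classifier is most confident. As a minimal sketch of the idea outside of exprso, using the e1071 package (one implementation of SVMs in R) and the built-in iris data, it might look like the following; exprso performs the analogous bookkeeping automatically, and its internals may differ from this illustration:

```r
library(e1071)

classes <- levels(iris$Species)

# Train one binary "this class vs. the rest" SVM per class
models <- lapply(classes, function(cl) {
  y <- factor(ifelse(iris$Species == cl, cl, "rest"))
  svm(iris[, 1:4], y, kernel = "linear", probability = TRUE)
})

# Score each sample with every 1-vs-all model, then assign
# each sample to the class with the highest confidence
scores <- sapply(seq_along(classes), function(i) {
  p <- predict(models[[i]], iris[, 1:4], probability = TRUE)
  attr(p, "probabilities")[, classes[i]]
})
predicted <- classes[apply(scores, 1, which.max)]
table(predicted, iris$Species)
```

The same pattern generalizes to any binary classifier that produces a confidence score.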

While we acknowledge that the premier machine learning suites, like caret, may surpass our package in the breadth of their functionality, we do not intend to replace this tool. Rather, we developed exprso as an adjunct, or alternative, tailored specifically to those with limited programming experience, especially biologists working with high-dimensional data. That said, we hope that even some expert programmers may find value in this software tool.

Methods

Implementation

This package uses an object-oriented framework for classification. In this paradigm, every unique task, such as data splitting (i.e., creating the training and validation sets), feature selection, and classifier construction, has its own associated function, called a method. These methods typically work as wrappers for other R packages, structured so that the objects returned by one method will feed seamlessly into the next method.

In other words, each method represents one of a number of analytical modules that provides the user with stackable and interchangeable data processing tools. Examples of these methods include wrappers for popular feature selection methods (e.g., analysis of variance (ANOVA), recursive feature elimination6,7, empiric Bayes statistic8, minimum redundancy maximum relevancy (mRMR)9, and more) as well as numerous classification methods (e.g., support vector machines (SVM)10, neural networks11, deep neural networks12, random forests13, and more).

We have adopted a nomenclature to help organize the methods available in this package. In this scheme, most functions have a few letters in the beginning of their name to designate their general utility. Below, we include a brief description of these function prefixes along with a flow diagram of the available methods.

  • array: Modules that import data stored as a data.frame, ExpressionSet object, or local text file.

  • mod: Modules that modify the imported data prior to classification.

  • split: Modules that split these data into training and validation (or test) sets.

  • fs: Modules that perform feature selection.

  • build: Modules that build classifiers and classifier ensembles.

  • predict: Modules that deploy classifiers and classifier ensembles.

  • calc: Modules that calculate classifier performance, including area under the receiver operating characteristic (ROC) curve (AUC).

  • pl: Modules that manage elaborate classification pipelines, including high-throughput parameter grid-searches, Monte Carlo cross-validation, and nested cross-validation.

  • pipe: Modules that filter the classification pipeline results.


Figure 1. A directed graph of all modules included in the exprso package and how they might relate to one another in a complete pipeline.

Elements colored grey exist outside of this package and instead refer to natively compatible components from the GEOquery14 and Biobase15 packages.

We refer the reader to the package vignette, “An Introduction to the exprso Package,” hosted with the package on the Comprehensive R Archive Network (CRAN), for a detailed description of the object-oriented framework and methods used in this package16.

Operation

Specific computer hardware requirements will depend on the dimensions of the dataset under study, the methods deployed on that dataset, and the extent of any high-throughput analyses used. For the most part, however, a standard laptop computer with the latest version of R installed will handle most applications of the exprso package.
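Since exprso is distributed through CRAN (see Software availability), setup requires nothing beyond a standard R installation:

```r
# Install exprso from CRAN (one-time), then load it for the session
install.packages("exprso")
library(exprso)
```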

Use cases

To showcase this package, we make use of the publicly available hallmark Golub 1999 dataset to differentiate acute lymphocytic leukemia (ALL) from acute myelogenous leukemia (AML) based on gene expression as measured by microarray technology17. We begin by importing this dataset as an ExpressionSet object from the package golubEsets (version 1.16.0)18. Then, using the arrayExprs function, we load the ExpressionSet object into exprso. Next, using the modFilter, modTransform, and modNormalize methods, we threshold filter, log2 transform, and standardize the data, respectively, reproducing the pre-processing steps taken by the original investigators19.

To keep the code clear and concise, we make use of the %>% function notation from the magrittr package20. In short, this function passes the result from the previous function call to the first argument of the next function, an action colloquially known as piping.
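For readers new to this notation, the pipe simply turns nested function calls inside-out into a left-to-right chain; the two expressions below are equivalent (a generic example, not specific to exprso):

```r
library(magrittr)

# Nested form: read from the inside out
sqrt(sum(c(1, 4, 9, 16)))

# Piped form: read from left to right
c(1, 4, 9, 16) %>% sum %>% sqrt

# Both evaluate the sum first (30) and then take its square root
```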

library(exprso)
library(golubEsets)
library(magrittr)

data(Golub_Merge)

array <-
  arrayExprs(Golub_Merge,
             colBy = "ALL.AML",
             include = list("ALL", "AML")) %>%
  modFilter(20, 16000, 500, 5) %>%
  modTransform %>%
  modNormalize

Then, using the splitSample method, one of the split methods shown in the diagram above, we partition the data into a training set and a test set through random sampling without replacement. Next, we perform a series of feature selection steps on the extracted training set: through the fs modules fsStats and fsPrcomp, we select the top 50 features by Student’s t-test and then reduce them by principal components analysis (PCA).

splitSets <- splitSample(array, percent.include = 67)

array.fs <-
  trainingSet(splitSets) %>%
  fsStats(how = "t.test") %>%
  fsPrcomp(top = 50)

With feature selection complete, we can construct a classifier. For this example, we use the buildSVM method to train a linear kernel support vector machine (SVM) (with default parameters) using the top 5 principal components. Then, we deploy the trained machine on the test set from above. Note that through the object-oriented framework, each feature selection event, including the rules for dimension reduction by PCA, gets passed along automatically until classifier prediction. This ensures that the test set always undergoes the same feature selection history as the training set. The calcStats function allows us to calculate classifier performance as sensitivity, specificity, accuracy, or area under the curve (AUC)21,22.

pred <-
  array.fs %>%
  buildSVM(top = 5, kernel = "linear") %>%
  predict(testSet(splitSets))

calcStats(pred)

When constructing a classifier using a build module, we can only specify one set of parameters at a time. However, investigators often want to test models across a vast range of parameters. For this reason, we provide methods like plGrid to automate high-throughput parameter grid-searches. These methods not only wrap classifier construction, but classifier deployment as well. In addition, they accept a fold argument to toggle leave-one-out or v-fold cross-validation.

Below, we show a simple example of parameter grid-searching, whereby the top 3, 5, and 10 principal components, as established above, get used as a substrate for the construction of linear and radial kernel SVMs with costs of 1, 101, and 1001. In addition, we calculate a biased 10-fold cross-validation accuracy to help guide our choice of the final model parameters. Take note that we call this accuracy biased because we are performing cross-validation on a dataset that has already undergone feature selection. Although this approach gives a poor assessment of absolute classifier performance23, it may still have value in helping to guide parameter selection in a statistically valid manner. As an alternative to this biased cross-validation accuracy, users can instead call the plNested method in which feature selection is performed anew with each data split that occurs during the leave-one-out or v-fold cross-validation.

pl <-
  plGrid(array.fs, testSet(splitSets),
         top = c(3, 5, 10),
         how = "buildSVM",
         kernel = c("linear", "radial"),
         cost = c(1, 101, 1001),
         fold = 10)

Finally, we show an example for the plMonteCarlo method, an implementation of Monte Carlo cross-validation. Compared to the plGrid method which iteratively builds and deploys classifiers on a validation (or test) set, plMonteCarlo wraps multiple iterations of data splitting, feature selection, and parameter grid-searching. The final result therefore contains the classifier performances as measured on a number of bootstraps carved out from the initial dataset. Argument handler functions help organize the arguments supplied to the split, feature selection, and high-throughput methods managed during the plMonteCarlo method call.

Take note that when using the Monte Carlo cross-validation method (or any of the other pl modules), the user may iterate over any classification method provided by exprso, not only buildSVM. This includes the buildDNN method for deep neural networks as implemented via h2o12. Also note that the user can embed other cross-validation methods, such as another Monte Carlo or nested method, within the cross-validation method call, allowing cross-validation schemes to be nested freely.

In the first section of the code below, we define the argument handler functions for the plMonteCarlo call. As suggested by their names, the ctrlSplitSet, ctrlFeatureSelect, and ctrlGridSearch handlers manage arguments to data splitting, feature selection, and high-throughput grid-searching, respectively. In this example, we set up arguments to split the unaltered training set through random sampling with replacement, perform the two-step feature selection process from above, and run a high-throughput parameter grid-search with biased cross-validation. The unaltered dataset is processed in this way 10 times, as directed by argument B.

ss <-
  ctrlSplitSet(func = "splitSample",
               percent.include = 67,
               replace = TRUE)

fs <-
  list(ctrlFeatureSelect(func = "fsStats",
                         how = "t.test"),
       ctrlFeatureSelect(func = "fsPrcomp",
                         top = 50))

gs <-
  ctrlGridSearch(func = "plGrid",
                 top = c(3, 5, 10),
                 how = "buildSVM",
                 kernel = c("linear", "radial"),
                 cost = c(1, 101, 1001),
                 fold = 10)

boot <-
  plMonteCarlo(trainingSet(splitSets),
               B = 10,
               ctrlSS = ss,
               ctrlFS = fs,
               ctrlGS = gs)

As an adjunct to this bootstrapping pipeline, the user can apply these results to build a classifier ensemble using the best classifier from each bootstrap, then deploy that classifier on the withheld test set. Analogous to how random forests will deploy an ensemble of decision trees24, this method, which we dub “random plains”, will deploy an ensemble of SVMs.

ens <- buildEnsemble(boot, colBy = "valid.acc", top = 1)
pred <- predict(ens, testSet(splitSets))
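Here, the arguments colBy = "valid.acc" and top = 1 select the single best classifier from each of the 10 bootstraps, ranked by validation accuracy. As with the single SVM earlier, the ensemble's predictions on the withheld test set can then be scored:

```r
# Evaluate the ensemble on the withheld test set,
# just as we did for the single SVM above
calcStats(pred)
```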

Beyond those mentioned here, this package also includes methods for integrating unsupervised machine learning (i.e., clustering) into classification pipelines. In addition, exprso contains high-throughput methods specialized for multi-class classification. We refer the reader to the package vignettes, “An Introduction to the exprso Package” and “Advanced Topics for the exprso Package”, both hosted with the package on the Comprehensive R Archive Network (CRAN), for a detailed description of all methods included in this package16.

Summary

Here we introduce the R package exprso, a machine learning suite tailored specifically to working with high-dimensional data. Unlike other machine learning suites, we have prioritized simplicity of use over expansiveness. By developing this package in an object-oriented framework, we provide a fully interchangeable and modular programming interface that allows for the rapid implementation of binary and multi-class classification pipelines. We have included in this framework functions for executing some of the most popular feature selection methods and classification algorithms. In addition, exprso also contains a number of modules that facilitate classification with high-throughput parameter grid-searching in conjunction with sophisticated cross-validation schemes. Owing to its ease-of-use and extensive documentation, we hope exprso will serve as an indispensable resource, especially to scientific investigators with limited prior programming experience.

Software availability

Software available from: http://cran.r-project.org/web/packages/exprso/

Latest source code: http://github.com/tpq/exprso

Archived source code as at time of publication: http://doi.org/10.5281/zenodo.16206325

Software license: GNU General Public License, version 2.

How to cite this article
Quinn T, Tylee D and Glatt S. exprso: an R-package for the rapid implementation of machine learning algorithms [version 1; peer review: 1 approved, 1 approved with reservations]. F1000Research 2016, 5:2588 (https://doi.org/10.12688/f1000research.9893.1)