Pipeline-class {PreProcess}                                R Documentation
A Pipeline represents a standard multi-step procedure for processing
microarray data: a series of Processors that should be applied in
order. You can think of a pipeline as a completely defined (and
reusable) set of transformations that is applied uniformly to every
microarray in a data set.
## S4 method for signature 'ANY, Pipeline':
process(object, action, parameter=NULL)

## S4 method for signature 'Pipeline':
summary(object, ...)

makeDefaultPipeline(ef = PROC.SIGNAL, ep = 0,
                    nf = PROC.GLOBAL.NORMALIZATION, np = 0,
                    tf = PROC.THRESHOLD, tp = 25,
                    lf = PROC.LOG.TRANSFORM, lp = 2,
                    name = "standard pipe",
                    description = "my method")
object
  In the process method, any object appropriate as input to the
  Pipeline. In the summary method, a Pipeline object.

action
  A Pipeline object used to process an object.

parameter
  Irrelevant, since the Pipeline ignores the parameter when process is
  invoked.

...
  Additional arguments are as in the underlying generic methods.
ef
  "Extractor function": the first Processor in the Pipeline, typically
  a method that extracts a single kind of raw measurement from a
  microarray.

ep
  Default parameter value for ef.

nf
  "Normalization function": the second Processor in the Pipeline,
  typically a normalization step.

np
  Default parameter value for nf.

tf
  "Threshold function": the third Processor in the Pipeline, typically
  a step that truncates the data below at some threshold.

tp
  Default parameter value for tf.

lf
  "Log function": the fourth Processor in the Pipeline, typically a
  log transformation.

lp
  Default parameter value for lf.

name
  A string; the name of the pipeline.

description
  A string; a longer description of the pipeline.
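For example, a customized pipeline can be built by overriding only the
pieces that need to change. The call below is a sketch: the PROC.*
processors and argument names come from the Usage section above, while
the threshold value (100) and the name/description strings are
illustrative choices, not package defaults.

  library(PreProcess)

  # Sketch: same processors as the default pipeline, but truncating at
  # 100 instead of 25; arguments left unspecified keep their defaults.
  myPipe <- makeDefaultPipeline(tp = 100,
                                name = "log2 pipe, threshold 100",
                                description = "signal, normalize, threshold at 100, log2")
  summary(myPipe)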
A key feature of a Pipeline is that it is supposed to represent a
standard algorithm that is applied to all objects when processing a
microarray data set. For that reason, the parameter that can be passed
to the process function is ignored, ensuring that the same parameter
values are used to process all objects. By contrast, each Processor
that is inserted into a Pipeline allows the user to supply a parameter
that overrides its default value.
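To make the contrast concrete, the sketch below assumes a Channel-like
object x (such as the one constructed in the Examples): the optional
third argument to process overrides the default of an individual
Processor, but has no effect when the action is a Pipeline.

  # Sketch; 'x' stands for any object the pipeline accepts (e.g. a Channel).
  thresholded <- process(x, PROC.THRESHOLD, 100)    # Processor: 100 overrides its default
  standard    <- process(x, PIPELINE.STANDARD, 100) # Pipeline: the 100 is silently ignored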
We provide a single constructor, makeDefaultPipeline, to build a
specialized kind of Pipeline tailored to the analysis of fluorescently
labeled single channels in a microarray experiment. More general
Pipelines can be constructed using new.
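As a sketch of the more general route (assuming the slots described
below, and that the PROC.* constants are Processor objects, as their
use in makeDefaultPipeline suggests), a two-step pipeline could be
assembled directly with new:

  # Hypothetical two-step pipeline: global normalization, then log2.
  twoStep <- new("Pipeline",
                 proclist = list(PROC.GLOBAL.NORMALIZATION, PROC.LOG.TRANSFORM),
                 name = "normalize + log2",
                 description = "global normalization followed by a log2 transform")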
The return value of the generic function process is always an object
related to its input, which keeps a record of its history. The precise
class of the result depends on the functions used to create the
Pipeline.
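For instance (a sketch, reusing the simulated Channel called subbed
from the Examples below), processing a Channel with PIPELINE.STANDARD
yields another Channel that carries its processing record with it:

  processed <- process(subbed, PIPELINE.STANDARD)
  class(processed)    # still a Channel here, as the plotting calls in the Examples suggest
  summary(processed)  # the processing record travels with the returned object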
Slots:

proclist
  A list of Processor objects.
name
  A string; the name of the pipeline.
description
  A string; a longer description of the pipeline.

Methods:

process(object, action, parameter)
  Applies the Pipeline action to the object, updating its history
  appropriately. The parameter is ignored, since the Pipeline always
  uses its default values.
The library comes with two Pipeline objects already defined:

PIPELINE.STANDARD
  Accepts a Channel object as input. Performs global normalization by
  rescaling the 75th percentile to 1000, truncates below at 25, then
  performs a log (base-two) transformation.

PIPELINE.MDACC.DEFAULT
  Accepts a CompleteChannel object as input, extracts the raw signal
  intensity, and then performs the same processing as
  PIPELINE.STANDARD.
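Because summary is defined for Pipeline objects (see Usage), both
pre-defined pipelines can be inspected directly; a minimal sketch:

  summary(PIPELINE.STANDARD)        # describe the standard Channel pipeline
  summary(PIPELINE.MDACC.DEFAULT)   # describe the CompleteChannel variant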
Kevin R. Coombes <kcoombes@mdanderson.org>
See also: Channel, process, CompleteChannel.
# simulate a moderately realistic looking microarray
nc <- 100
nr <- 100
v <- rexp(nc*nr, 1/1000)
b <- rnorm(nc*nr, 80, 10)
s <- sapply(v-b, max, 1)
ct <- ChannelType('user', 'random', nc, nr, 'fake')
subbed <- Channel(name='fraud', parent='', type=ct, x=s)
rm(ct, nc, nr, v, b, s)   # clean some stuff

# example of standard data processing
processed <- process(subbed, PIPELINE.STANDARD)
summary(processed)
par(mfrow=c(2,1))
plot(processed)
hist(processed)
par(mfrow=c(1,1))
image(processed)
rm(subbed, processed)