stream_pipeline {stream}    R Documentation
Create a Data Stream Pipeline
Description
Define a complete data stream pipeline consisting of a data stream, filters, and a data mining task using %>%.
Usage
DST_Runner(dsd, dst)
Arguments
dsd: A data stream (subclass of DSD), typically provided using a %>% (pipe).
dst: A data stream mining task (subclass of DST).
Details
A data stream pipeline consists of a data stream, filters, and a data mining task:
DSD %>% DSF %>% DST_Runner
Once the pipeline is defined, it can be run using update(): points are taken from the DSD data stream source, filtered through the sequence of DSF filters, and then used to update the DST task.
DST_Multi can be used to update multiple models in the pipeline with the same stream.
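A minimal sketch of such a multi-model pipeline, assuming DST_Multi() accepts a list of DST objects (the clusterers DSC_DBSTREAM and DSC_DStream are used purely for illustration):

# update two clusterers with the same filtered stream
multi_pipeline <- DSD_Gaussians(k = 3, d = 2) %>%
  DSF_Scale() %>%
  DST_Runner(DST_Multi(list(DSC_DBSTREAM(r = .3), DSC_DStream(gridsize = .1))))
update(multi_pipeline, n = 500)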
Author(s)
Michael Hahsler
See Also
Other DST:
DSAggregate(), DSC(), DSClassifier(), DSOutlier(), DSRegressor(), DST(), DST_SlidingWindow(), DST_WriteStream(), evaluate, predict(), update()
Examples
library(stream)
set.seed(1500)

# Set up a pipeline with a DSD data source, DSF filters, and then a DST task
cluster_pipeline <- DSD_Gaussians(k = 3, d = 2) %>%
  DSF_Scale() %>%
  DST_Runner(DSC_DBSTREAM(r = .3))
cluster_pipeline
# the DSD and DST can be accessed directly
cluster_pipeline$dsd
cluster_pipeline$dst
# Update the DST using the pipeline; by default, update() returns the micro-clusters
update(cluster_pipeline, n = 1000)
cluster_pipeline$dst
get_centers(cluster_pipeline$dst, type = "macro")
plot(cluster_pipeline$dst)
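# A sketch of assigning a few new points from the stream to the learned
# clustering (assumes predict() for clusterers as listed under See Also, and
# that get_points() takes info = FALSE to drop ground-truth info columns)
new_points <- get_points(cluster_pipeline$dsd, n = 5, info = FALSE)
predict(cluster_pipeline$dst, new_points, type = "macro")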