serve_savedmodel {tfdeploy}    R Documentation
Serve a SavedModel
Description
Serve a TensorFlow SavedModel as a local web API.
Usage
serve_savedmodel(model_dir, host = "127.0.0.1", port = 8089,
  daemonized = FALSE, browse = !daemonized)
Arguments
model_dir
    The path to the exported model, as a string.
host
    The address used to serve the model, as a string.
port
    The port used to serve the model, as a number.
daemonized
    Runs the 'httpuv' server in daemonized mode so that handling requests does not block the interactive R session. To terminate a daemonized server, call 'httpuv::stopDaemonizedServer()' with the handle returned by this call (see the sketch in the Examples below).
browse
    Launch a browser with the serving landing page?
Examples
## Not run:
# serve an existing model over a web interface
tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)
## End(Not run)
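
## Not run:
# A minimal sketch of daemonized serving. It assumes, per the 'daemonized'
# argument description above, that the call returns an 'httpuv' server handle.
handle <- tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy"),
  daemonized = TRUE
)

# The R session stays responsive while the server handles requests;
# stop the daemonized server with the returned handle when done.
httpuv::stopDaemonizedServer(handle)
## End(Not run)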
[Package tfdeploy version 0.6.1 Index]