bedrockruntime_invoke_model_with_response_stream {paws.machine.learning}    R Documentation
Invoke the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body
Description
Invoke the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream.
See https://www.paws-r-sdk.com/docs/bedrockruntime_invoke_model_with_response_stream/ for full documentation.
Usage
bedrockruntime_invoke_model_with_response_stream(
  body,
  contentType = NULL,
  accept = NULL,
  modelId,
  trace = NULL,
  guardrailIdentifier = NULL,
  guardrailVersion = NULL
)
Arguments
body
[required] The prompt and inference parameters in the format specified in the contentType request header. The format of the body is specific to the model you invoke.
contentType
The MIME type of the input data in the request. The default value is application/json.
accept
The desired MIME type of the inference body in the response. The default value is application/json.
modelId
[required] The unique identifier of the model to invoke to run inference. The identifier to supply depends on the type of model (for example, a base model ID, an inference profile ARN, or a provisioned model ARN); see the full documentation linked above for the supported formats.
trace
Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
guardrailIdentifier
The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error is thrown in the following situations:

- You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body.
- You enable the guardrail but the contentType isn't application/json.
- You provide a guardrail identifier, but guardrailVersion isn't specified.
guardrailVersion
The version number for the guardrail. The value can also be DRAFT.
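A minimal sketch of a streaming invocation. It assumes AWS credentials are already configured, and it assumes an Anthropic Claude model ID and the Claude Messages request schema as an illustration; the body format for your model may differ, and the exact shape of the streamed response is described in the documentation linked above.

```r
library(paws.machine.learning)
library(jsonlite)

# Create a Bedrock Runtime client (credentials come from the usual
# AWS configuration chain: environment variables, config files, etc.).
client <- bedrockruntime()

# Model-specific request body. This sketch assumes the Anthropic Claude
# Messages schema; auto_unbox = TRUE keeps scalar values out of JSON arrays.
body <- toJSON(
  list(
    anthropic_version = "bedrock-2023-05-31",
    max_tokens = 256,
    messages = list(
      list(role = "user", content = "Write a haiku about mountains.")
    )
  ),
  auto_unbox = TRUE
)

resp <- client$invoke_model_with_response_stream(
  body = body,
  contentType = "application/json",
  accept = "application/json",
  modelId = "anthropic.claude-3-haiku-20240307-v1:0"  # hypothetical choice
)

# The response body arrives as a stream of chunk events; each chunk carries
# raw JSON bytes that must be parsed per the model's streaming format.
```

No test is included because the call requires live AWS credentials and a provisioned Bedrock model.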