bedrockruntime_invoke_model {paws.machine.learning}    R Documentation
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body
Description
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. You use model inference to generate text, images, and embeddings.
See https://www.paws-r-sdk.com/docs/bedrockruntime_invoke_model/ for full documentation.
Usage
bedrockruntime_invoke_model(
  body,
  contentType = NULL,
  accept = NULL,
  modelId,
  trace = NULL,
  guardrailIdentifier = NULL,
  guardrailVersion = NULL
)
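
A minimal sketch of a call, assuming a client created with bedrockruntime(), the Anthropic Claude messages request format, and an illustrative model ID (use a model that is enabled in your account, and shape the body to whatever the target model expects):

library(paws)

# Create a Bedrock Runtime client; credentials and region come from the
# usual AWS configuration sources (environment variables, config files, etc.).
svc <- bedrockruntime()

# Build a request body in the format the target model expects. This sketch
# assumes the Anthropic Claude messages schema; other models use other schemas.
payload <- jsonlite::toJSON(
  list(
    anthropic_version = "bedrock-2023-05-31",
    max_tokens = 256,
    messages = list(
      list(role = "user", content = "Write a haiku about mountains.")
    )
  ),
  auto_unbox = TRUE
)

resp <- svc$invoke_model(
  body = charToRaw(as.character(payload)),
  contentType = "application/json",
  accept = "application/json",
  modelId = "anthropic.claude-3-haiku-20240307-v1:0"  # illustrative model ID
)

# The response body is returned as raw bytes; decode and parse the JSON.
result <- jsonlite::fromJSON(rawToChar(resp$body))
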
Arguments
body
[required] The prompt and inference parameters in the format specified in the contentType request header field. You must provide the body in JSON format.

contentType
The MIME type of the input data in the request. The default value is application/json.

accept
The desired MIME type of the inference body in the response. The default value is application/json.

modelId
[required] The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model or throughput that you use, such as a base model, an inference profile, or a Provisioned Throughput.

trace
Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

guardrailIdentifier
The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations:
- You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body.
- You enable the guardrail but the contentType isn't application/json.
- You provide a guardrail identifier, but guardrailVersion isn't specified.
A sketch showing how the guardrail arguments fit together follows this argument list.

guardrailVersion
The version number for the guardrail. The value can also be DRAFT.
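
A brief sketch of a guardrail-enabled call under the same assumptions as above; the guardrail identifier, guardrail version, and model ID below are placeholders, and trace is assumed to take the string values "ENABLED" or "DISABLED":

library(paws)

svc <- bedrockruntime()

# Hypothetical guardrail identifier, version, and model ID; substitute values
# from your own account. guardrailVersion may also be "DRAFT".
resp <- svc$invoke_model(
  body = charToRaw('{
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 128,
    "messages": [{"role": "user", "content": "Tell me about solar panels."}]
  }'),
  contentType = "application/json",   # guardrails require a JSON request body
  accept = "application/json",
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
  trace = "ENABLED",                  # include the Bedrock trace in the response
  guardrailIdentifier = "gr-abc123",  # hypothetical guardrail ID
  guardrailVersion = "1"
)

# Decode the raw response body to inspect the model output and trace.
cat(rawToChar(resp$body))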