palm.txt {PaLMr}    R Documentation

Generate text using the Google PaLM 2 text model based on a prompt

Description

This function sends a prompt to the Google PaLM 2 text model and generates text as a response. It allows customization of the generated text using various parameters.

Usage

palm.txt(
  model.parameter,
  prompt,
  temperature = 0.7,
  maxOutputTokens = 1024,
  topP = 0.95,
  topK = 40,
  htUnspecified = "meda",
  htDerogatory = "meda",
  htToxicity = "meda",
  htViolence = "meda",
  htSexual = "meda",
  htMedical = "meda",
  htDangerous = "meda"
)

Arguments

model.parameter

A character vector containing the API key, model version, and proxy status. Model version and type are specified by Google. See the function palm.connect for details.

prompt

A character string representing the query or prompt for text generation. The length of the query should be between 1 and 8196 characters, inclusive.
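The length constraint above can be checked locally before any request is sent; a minimal sketch using base R's nchar:

```r
prompt <- "Write a story about a magic backpack."

# The prompt must be between 1 and 8196 characters, inclusive.
stopifnot(nchar(prompt) >= 1, nchar(prompt) <= 8196)
```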

temperature

A numeric value between 0.0 and 1.0, inclusive (default: 0.7). Controls the randomness of the generated text. A higher value (e.g., 0.9) results in more creative responses, while a lower value (e.g., 0.3) produces more straightforward text.

maxOutputTokens

An integer value (default: 1024). Specifies the maximum number of tokens to include in the generated text.

topP

A numeric value (default: 0.95). Defines the maximum cumulative probability of tokens considered when sampling: tokens are drawn from most probable to least until their combined probability reaches this value. Lower values reduce the diversity of the generated text.

topK

An integer value (default: 40). Restricts sampling to the topK most probable tokens at each step.
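As a sketch (not run), the sampling parameters above can be combined to make the output more or less deterministic; this assumes a connection object created with palm.connect:

```r
## Not run: 
# Hypothetical connection; replace "API_KEY" with your API key.
palm.model <- palm.connect("v1beta2", "API_KEY", FALSE)

# Lower temperature plus tighter topP/topK favors predictable output;
# raise them for more varied, creative text.
deterministic.text <- palm.txt(palm.model,
                               "List three uses for a paper clip.",
                               temperature = 0.2,
                               maxOutputTokens = 256,
                               topP = 0.8,
                               topK = 20)
cat(deterministic.text)

## End(Not run)
```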

htUnspecified

Safety setting threshold for unspecified harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE

htDerogatory

Safety setting threshold for derogatory harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE

htToxicity

Safety setting threshold for toxicity harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE

htViolence

Safety setting threshold for violence harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE

htSexual

Safety setting threshold for sexual harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE

htMedical

Safety setting threshold for medical harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE

htDangerous

Safety setting threshold for dangerous harm. The default threshold is "meda". Valid options are as follows:

  "unsp"  HARM_BLOCK_THRESHOLD_UNSPECIFIED
  "lowa"  BLOCK_LOW_AND_ABOVE
  "meda"  BLOCK_MEDIUM_AND_ABOVE
  "high"  BLOCK_ONLY_HIGH
  "none"  BLOCK_NONE
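For illustration (not run), the harm thresholds above can be tightened or relaxed per category; this sketch assumes a connection object from palm.connect and uses a mix of thresholds chosen purely as an example:

```r
## Not run: 
# Hypothetical connection; replace "API_KEY" with your API key.
palm.model <- palm.connect("v1beta2", "API_KEY", FALSE)

# "high" blocks only high-probability unsafe content; "none" disables
# blocking for that category; "meda" keeps the default behavior.
relaxed.text <- palm.txt(palm.model,
                         "Describe common kitchen-knife safety tips.",
                         htUnspecified = "high",
                         htDerogatory = "high",
                         htToxicity = "high",
                         htViolence = "meda",
                         htSexual = "meda",
                         htMedical = "none",
                         htDangerous = "meda")
cat(relaxed.text)

## End(Not run)
```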

Details

This function interacts with the Google PaLM 2 model by sending a request built from the specified parameters. You can customize the generated text by adjusting 'temperature', 'maxOutputTokens', 'topP', 'topK', and the safety settings.

If the function is successful, it returns a character string containing the generated text. If an error occurs during the API request, it will stop execution and provide an error message.

The 'model.parameter' argument should be a character vector containing the API key, model version, and proxy status, as returned by palm.connect. You can obtain an API key by following the instructions provided by Google for using the PaLM API.

The safety settings filter the generated content by harm category. Harm thresholds follow Google's guidelines and can be adjusted per category to control what the model is allowed to return.
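Because the function stops execution on an API error, a small wrapper using tryCatch (a sketch, not run; safe.palm.txt is a hypothetical helper name) can keep an interactive session alive:

```r
## Not run: 
# Hypothetical connection; replace "API_KEY" with your API key.
palm.model <- palm.connect("v1beta2", "API_KEY", FALSE)

# Returns NA instead of stopping when the API request fails.
safe.palm.txt <- function(model, prompt) {
  tryCatch(
    palm.txt(model, prompt),
    error = function(e) {
      message("PaLM request failed: ", conditionMessage(e))
      NA_character_
    }
  )
}

result <- safe.palm.txt(palm.model, "Write a haiku about autumn.")

## End(Not run)
```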

Value

A character string generated by the Google PaLM 2 API based on the provided prompt and parameters.

See Also

PaLMr - Documentation

Safety Setting - Google AI for Developers

HarmCategory - Google AI for Developers

Examples

## Not run: 
# Connect to the model; replace "API_KEY" with your API key
palm.model = palm.connect("v1beta2",
                          "API_KEY",
                          FALSE)

prompt = "Write a story about a magic backpack."
generated.text = palm.txt(palm.model,
                          prompt)
cat(generated.text)

## End(Not run)


[Package PaLMr version 0.2.0 Index]