tokenize_word_stems {tokenizers}    R Documentation
Word stem tokenizer
Description
This function turns its input into a character vector of word stems. It is
just a wrapper around the wordStem function from the SnowballC package, which
does the heavy lifting, but it provides a consistent interface with the rest
of the tokenizers in this package. The input can be a character vector of any
length, or a list of character vectors where each character vector in the
list has a length of 1.
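As a minimal sketch of the two accepted input forms (the example strings and list names below are arbitrary):

# Character vector input: each element is stemmed separately
tokenize_word_stems(c("running dogs", "jumping cats"))

# List input: each element must be a character vector of length 1
tokenize_word_stems(list(doc1 = "running dogs", doc2 = "jumping cats"))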
Usage
tokenize_word_stems(
  x,
  language = "english",
  stopwords = NULL,
  simplify = FALSE
)
Arguments
x
A character vector or a list of character vectors to be tokenized. If x is a character vector, it can be of any length; if x is a list, each character vector in the list must have a length of 1.

language
The language to use for word stemming. This must be one of the languages available in the SnowballC package. A list is provided by SnowballC::getStemLanguages().

stopwords
A character vector of stop words to be excluded.

simplify
FALSE by default, so that a consistent value is returned regardless of the length of the input. If TRUE and a single element was passed as input, the output is a character vector of tokens rather than a list.
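For example, you can list the available stemmers and stem text in another language (a sketch assuming German is among the languages supported by your SnowballC installation):

SnowballC::getStemLanguages()
tokenize_word_stems("Die Häuser und die Bäume", language = "german")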
Details
This function will strip all white space and punctuation and make all word stems lowercase.
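For instance, punctuation and case do not survive stemming (the exact stems shown in the comment are approximate):

tokenize_word_stems("The RUNNING, jumping dogs!")
# [[1]] is roughly c("the", "run", "jump", "dog")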
Value
A list of character vectors containing the tokens, with one element in the
list for each element that was passed as input. If simplify = TRUE and only a
single element was passed as input, then the output is a character vector of
tokens.
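A brief sketch of the difference: with the default simplify = FALSE a single input still yields a list, while simplify = TRUE drops the wrapping list for single-element input.

tokenize_word_stems("many roads")                    # list of length 1
tokenize_word_stems("many roads", simplify = TRUE)   # character vector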
See Also
SnowballC::wordStem()
Examples
song <- paste0("How many roads must a man walk down\n",
               "Before you call him a man?\n",
               "How many seas must a white dove sail\n",
               "Before she sleeps in the sand?\n",
               "\n",
               "How many times must the cannonballs fly\n",
               "Before they're forever banned?\n",
               "The answer, my friend, is blowin' in the wind.\n",
               "The answer is blowin' in the wind.\n")
tokenize_word_stems(song)
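
# The stopwords argument can be combined with the same input; the stop word
# list here is only illustrative.
tokenize_word_stems(song, stopwords = c("a", "the", "in", "is"))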