tokenizer_set {elastic} | R Documentation
Tokenizer operations
Description
Set tokenizers for an Elasticsearch index via its analysis settings.
Usage
tokenizer_set(conn, index, body, ...)
Arguments
conn
    an Elasticsearch connection object, see connect()
index
    (character) A character vector of index names
body
    Query, either a list or JSON
...
    Curl options passed on to crul::HttpClient
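Because body accepts either JSON or a list, the NGram settings used in the example below can equivalently be built as a nested named list (a sketch; the field values mirror the JSON form, and the package serializes the list to JSON internally):

```r
# Sketch: list-form equivalent of the NGram tokenizer settings body
body <- list(
  settings = list(
    analysis = list(
      analyzer = list(
        my_ngram_analyzer = list(tokenizer = "my_ngram_tokenizer")
      ),
      tokenizer = list(
        my_ngram_tokenizer = list(
          type = "nGram",
          min_gram = "2",
          max_gram = "3",
          token_chars = list("letter", "digit")
        )
      )
    )
  )
)
```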
Author(s)
Scott Chamberlain myrmecocystus@gmail.com
References
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html
Examples
## Not run:
# connection setup
(x <- connect())
# set tokenizer
## NGram tokenizer
body <- '{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_ngram_analyzer" : {
          "tokenizer" : "my_ngram_tokenizer"
        }
      },
      "tokenizer" : {
        "my_ngram_tokenizer" : {
          "type" : "nGram",
          "min_gram" : "2",
          "max_gram" : "3",
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  }
}'
if (index_exists(x, 'test1')) index_delete(x, 'test1')
tokenizer_set(x, index = "test1", body = body)
index_analyze(x, text = "hello world", index = "test1",
  analyzer = 'my_ngram_analyzer')
## End(Not run)
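Other built-in tokenizer types are configured the same way: name a tokenizer under "tokenizer", give it a "type", and reference it from an analyzer. As a further sketch (the analyzer and tokenizer names and the "test2" index are illustrative, and x is assumed to be a connection from connect()), an edge_ngram tokenizer for prefix matching could be set up like this:

```r
## Not run:
# Sketch: edge_ngram tokenizer settings for prefix matching
# (my_edge_analyzer / my_edge_tokenizer are illustrative names)
body <- '{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_edge_analyzer" : {
          "tokenizer" : "my_edge_tokenizer"
        }
      },
      "tokenizer" : {
        "my_edge_tokenizer" : {
          "type" : "edge_ngram",
          "min_gram" : "1",
          "max_gram" : "5",
          "token_chars" : [ "letter" ]
        }
      }
    }
  }
}'
if (index_exists(x, 'test2')) index_delete(x, 'test2')
tokenizer_set(x, index = "test2", body = body)
## End(Not run)
```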
[Package elastic version 1.2.0 Index]