create_deberta_v2_model {aifeducation}    R Documentation

Function for creating a new transformer based on DeBERTa-V2

Description

This function creates a transformer configuration based on the DeBERTa-V2 base architecture and a vocabulary based on the SentencePiece tokenizer, using the python libraries 'transformers' and 'tokenizers'.

Usage

create_deberta_v2_model(
  ml_framework = aifeducation_config$get_framework(),
  model_dir,
  vocab_raw_texts = NULL,
  vocab_size = 128100,
  do_lower_case = FALSE,
  max_position_embeddings = 512,
  hidden_size = 1536,
  num_hidden_layer = 24,
  num_attention_heads = 24,
  intermediate_size = 6144,
  hidden_act = "gelu",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1,
  sustain_track = TRUE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  pytorch_safetensors = TRUE
)

Arguments

ml_framework

string Framework to use for training and inference. ml_framework="tensorflow" for 'tensorflow' and ml_framework="pytorch" for 'pytorch'.

model_dir

string Path to the directory where the model should be saved.

vocab_raw_texts

vector containing the raw texts for creating the vocabulary.

vocab_size

int Size of the vocabulary.

do_lower_case

bool If TRUE, all characters are transformed to lower case.

max_position_embeddings

int Number of maximal position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model.

hidden_size

int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding.

num_hidden_layer

int Number of hidden layers.

num_attention_heads

int Number of attention heads.

intermediate_size

int Number of neurons in the intermediate layer of the attention mechanism.

hidden_act

string Name of the activation function.

hidden_dropout_prob

double Ratio of dropout.

attention_probs_dropout_prob

double Ratio of dropout for attention probabilities.

sustain_track

bool If TRUE, energy consumption is tracked during training via the python library 'codecarbon'.

sustain_iso_code

string ISO code (alpha-3 code) for the country. This variable must be set if sustainability is to be tracked. A list of codes can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes.

sustain_region

Region within a country. Only available for the USA and Canada. See the documentation of codecarbon for more information: https://mlco2.github.io/codecarbon/parameters.html

sustain_interval

integer Interval in seconds for measuring power usage.

trace

bool TRUE if information about the progress should be printed to the console.

pytorch_safetensors

bool If TRUE, a 'pytorch' model is saved in safetensors format. If FALSE, or if 'safetensors' is not available, the model is saved in the standard pytorch format (.bin). Only relevant for pytorch models.
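
As a minimal sketch of a typical call (the output directory and the tiny example corpus are placeholders; in practice vocab_raw_texts should contain a large collection of documents):

# Sketch: paths and texts are placeholders.
example_texts <- c(
  "This is the first document of the raw corpus.",
  "This is a second document used to build the vocabulary."
)

create_deberta_v2_model(
  ml_framework = "pytorch",
  model_dir = "my_deberta_v2_model",
  vocab_raw_texts = example_texts,
  vocab_size = 128100,
  max_position_embeddings = 512,
  hidden_size = 1536,
  num_hidden_layer = 24,
  num_attention_heads = 24,
  intermediate_size = 6144,
  sustain_track = FALSE,
  trace = TRUE
)

# The configuration and vocabulary are written to 'my_deberta_v2_model';
# the created files can be inspected with list.files("my_deberta_v2_model").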

Value

This function does not return an object. Instead, the configuration and the vocabulary of the new model are saved to disk.

Note

To train the model, pass the directory of the model to the function train_tune_deberta_v2_model.

For this model a WordPiece tokenizer is created. The standard implementation of DeBERTa version 2 from HuggingFace uses a SentencePiece tokenizer. Thus, please use AutoTokenizer from the 'transformers' library to use this model.
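
As a hedged sketch (assuming the python environment used by 'aifeducation', with 'transformers' installed, is active, and that the model was created in "my_deberta_v2_model" as above), the saved tokenizer can be loaded via 'reticulate':

# Sketch: load the tokenizer of the saved model with AutoTokenizer.
transformers <- reticulate::import("transformers")
tokenizer <- transformers$AutoTokenizer$from_pretrained("my_deberta_v2_model")
tokenizer$tokenize("A short example sentence.")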

References

He, P., Liu, X., Gao, J. & Chen, W. (2020). DeBERTa: Decoding-enhanced BERT with Disentangled Attention. doi:10.48550/arXiv.2006.03654

Hugging Face Documentation https://huggingface.co/docs/transformers/model_doc/deberta-v2#debertav2

See Also

Other Transformer: create_bert_model(), create_funnel_model(), create_longformer_model(), create_roberta_model(), train_tune_bert_model(), train_tune_deberta_v2_model(), train_tune_funnel_model(), train_tune_longformer_model(), train_tune_roberta_model()


[Package aifeducation version 0.3.3 Index]