accuracy {wordmap}    R Documentation

Evaluate classification accuracy in terms of precision and recall

Description

accuracy() counts the number of true positive, false positive, true negative, and false negative cases for each predicted class and computes precision, recall, and the F1 score from these counts. summary() computes micro-averaged and macro-averaged precision and recall from the output of accuracy().
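For orientation, the per-class counts follow the usual one-vs-rest scheme. The snippet below is a minimal sketch of that counting for a single class, not the package's implementation; the helper name count_class() is hypothetical.

# Hypothetical sketch of one-vs-rest counting for a single class
count_class <- function(x, y, class) {
  tp <- sum(x == class & y == class)  # predicted as class and truly class
  fp <- sum(x == class & y != class)  # predicted as class but truly another
  tn <- sum(x != class & y != class)  # neither predicted nor truly class
  fn <- sum(x != class & y == class)  # truly class but predicted as another
  precision <- tp / (tp + fp)
  recall <- tp / (tp + fn)
  f1 <- 2 * precision * recall / (precision + recall)
  data.frame(tp, fp, tn, fn, precision, recall, f1)
}
# applying count_class() over all classes would yield one row per class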

Usage

accuracy(x, y)

## S3 method for class 'textmodel_wordmap_accuracy'
summary(object, ...)

Arguments

x

vector of predicted classes.

y

vector of true classes.

object

output of accuracy().

...

not used.

Value

accuracy() returns a data.frame with the following columns:

tp

the number of true positive cases.

fp

the number of false positive cases.

tn

the number of true negative cases.

fn

the number of false negative cases.

precision

tp / (tp + fp).

recall

tp / (tp + fn).

f1

the harmonic mean of precision and recall: 2 * precision * recall / (precision + recall).

summary() returns a named numeric vector with the following elements:

p

micro-average precision.

r

micro-average recall.

P

macro-average precision.

R

macro-average recall.
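The micro-averages pool the per-class counts before taking the ratios, while the macro-averages average the per-class scores. The lines below are a minimal sketch of both aggregations, assuming acc is the data.frame returned by accuracy(); this is not the package's summary() code, and the na.rm handling of undefined per-class scores is an assumption.

micro_p <- sum(acc$tp) / sum(acc$tp + acc$fp)  # micro-average precision
micro_r <- sum(acc$tp) / sum(acc$tp + acc$fn)  # micro-average recall
macro_p <- mean(acc$precision, na.rm = TRUE)   # macro-average precision
macro_r <- mean(acc$recall, na.rm = TRUE)      # macro-average recall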

Examples

library("wordmap")
class_pred <- c('US', 'GB', 'US', 'CN', 'JP', 'FR', 'CN') # predicted classes
class_true <- c('US', 'FR', 'US', 'CN', 'KP', 'EG', 'US') # true class
acc <- accuracy(class_pred, class_true)
print(acc)
summary(acc)

[Package wordmap version 0.8.0]