video_scores {transforEmotion}		R Documentation

Run FER on YouTube video

Description

This function retrieves FER (facial expression recognition) scores for a specified number of frames extracted from a YouTube video. It uses Python libraries for face detection and emotion classification in the video frames.
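
Note: the function relies on a Python backend, so the required Python modules must be available before the first call. A minimal setup sketch, assuming the setup_miniconda() helper exported by recent versions of transforEmotion:

  # One-time setup (assumption: setup_miniconda() is available in the
  # installed version of transforEmotion; it installs the required
  # Python modules into a dedicated miniconda environment)
  library(transforEmotion)
  setup_miniconda()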

Usage

video_scores(
  video,
  classes,
  nframes = 100,
  face_selection = "largest",
  start = 0,
  end = -1,
  uniform = FALSE,
  ffreq = 15,
  save_video = FALSE,
  save_frames = FALSE,
  save_dir = "temp/",
  video_name = "temp"
)

Arguments

video

The URL of the YouTube video to analyze.

classes

A character vector specifying the emotion classes to analyze.

nframes

The number of frames to analyze in the video. Default is 100.

face_selection

The method for selecting faces in the video. Options are "largest", "left", or "right". Default is "largest".

start

The start time of the video range to analyze. Default is 0.

end

The end time of the video range to analyze. Default is -1, which means the video will not be cut. If end is a positive number greater than start, only the segment from start to end is analyzed (see the sketch under Details below).

uniform

Logical indicating whether to uniformly sample frames from the video. Default is FALSE.

ffreq

The sampling frequency for extracting frames from the video. Default is 15.

save_video

Logical indicating whether to save the analyzed video. Default is FALSE.

save_frames

Logical indicating whether to save the analyzed frames. Default is FALSE.

save_dir

The directory in which to save the analyzed frames. Default is "temp/".

video_name

The file name used for the saved video. Default is "temp".
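
Details

The start, end, uniform, ffreq, and nframes arguments jointly control which frames are scored. Below is a hedged sketch of a call that restricts analysis to one segment of a video; the URL and emotion labels are placeholders, and treating start and end as seconds is an assumption, not something this page confirms:

  # Placeholder URL and labels for illustration only
  emotions <- c("anger", "fear", "happiness", "sadness", "surprise")
  scores <- video_scores(
    video = "https://www.youtube.com/watch?v=XXXXXXXXXXX",
    classes = emotions,
    nframes = 60,
    face_selection = "largest",  # score the largest detected face
    start = 30,                  # assumption: start/end are in seconds
    end = 90,                    # analyze only the 30-90 segment
    uniform = TRUE               # spread the 60 frames evenly over it
  )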

Value

A result object containing the FER scores for the analyzed video frames.

Author(s)

Aleksandar Tomašević <atomashevic@gmail.com>
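
Examples

A minimal illustrative run, shown as not-run because it downloads a video and requires the configured Python backend. The URL and class labels are placeholders, and the per-class column layout of the result is an assumption:

## Not run: 
result <- video_scores(
  video = "https://www.youtube.com/watch?v=XXXXXXXXXXX",
  classes = c("happy", "sad"),
  nframes = 100
)

# Assumption: the result holds one numeric score column per requested
# class; if so, per-class averages across frames can be computed as:
colMeans(result[, c("happy", "sad")], na.rm = TRUE)

## End(Not run)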


[Package transforEmotion version 0.1.4 Index]