projectKNNs {pagoda2}    R Documentation
Project a distance matrix into a lower-dimensional space. (from elbamos/largeVis)
Description
Takes as input a sparse matrix of the edge weights connecting each node to its nearest neighbors, and outputs a matrix of coordinates embedding the inputs in a lower-dimensional space.
Usage
projectKNNs(
  wij,
  dim = 2,
  sgd_batches = NULL,
  M = 5,
  gamma = 7,
  alpha = 1,
  rho = 1,
  coords = NULL,
  useDegree = FALSE,
  momentum = NULL,
  seed = NULL,
  threads = NULL,
  verbose = getOption("verbose", TRUE)
)
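As a rough sketch of how the pieces fit together, the snippet below builds a small symmetric weight matrix by hand and embeds it in two dimensions. The k-nearest-neighbour construction is only a toy stand-in for whatever routine normally produces wij (pagoda2 and largeVis supply their own); the data and weighting scheme are illustrative, not recommendations.

library(Matrix)
library(pagoda2)  # projectKNNs is also exported by largeVis

set.seed(1)                                    # see Note: seeds the R RNG used for initial coordinates
x <- matrix(rnorm(100 * 10), nrow = 100)       # 100 toy points in 10 dimensions
d <- as.matrix(dist(x))                        # full pairwise distance matrix

# Build a 5-nearest-neighbour graph with Gaussian edge weights.
k  <- 5
nn <- t(apply(d, 1, function(r) order(r)[2:(k + 1)]))   # neighbour indices, excluding self
i  <- rep(seq_len(nrow(x)), each = k)
j  <- as.vector(t(nn))
dd <- d[cbind(i, j)]
wij <- sparseMatrix(i = i, j = j, x = exp(-(dd / median(dd))^2),
                    dims = rep(nrow(x), 2))
wij <- (wij + t(wij)) / 2                      # symmetric dgCMatrix, no empty columns

coords <- projectKNNs(wij, dim = 2, verbose = FALSE)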
Arguments
wij: A symmetric sparse matrix of edge weights, in C-compressed format, as created with the Matrix package.

dim: numeric The number of dimensions for the projection space (default=2).

sgd_batches: numeric The number of edges to process during SGD (default=NULL). Defaults to a value set based on the size of the dataset. If the parameter given is between 0 and 1, the default value is multiplied by the parameter.

M: numeric (largeVis) The number of negative edges to sample for each positive edge (default=5).

gamma: numeric (largeVis) The strength of the force pushing non-neighbor nodes apart (default=7).

alpha: numeric (largeVis) The hyperparameter in the distance function (default=1). The default distance function, 1 / (1 + \alpha ||x||^2), relates the distance between two points in the projection space to the probability that they are nearest neighbors; setting alpha to zero selects the alternative function described in Details.

rho: numeric (largeVis) Initial learning rate (default=1).

coords: An initialized coordinate matrix (default=NULL).

useDegree: boolean Whether to use vertex degree to determine weights in negative sampling (if TRUE) or the sum of the weights of the vertex's edges (if FALSE) (default=FALSE).

momentum: If not NULL, SGD with momentum is used, with this multiplier, which must be between 0 and 1 (default=NULL). Note that momentum can drastically speed up training, at the cost of additional memory.

seed: numeric Random seed to be passed to the C++ functions (default=NULL). Sampled from the hardware entropy pool if NULL.

threads: numeric The maximum number of threads to spawn (default=NULL). Determined automatically if NULL.

verbose: boolean Verbosity (default=getOption("verbose", TRUE)).
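The largeVis hyperparameters can be combined freely in a single call. The example below is purely illustrative (the values are not tuned recommendations) and reuses the toy wij built in the sketch after Usage.

emb <- projectKNNs(wij, dim = 2,
                   M = 10,           # more negative samples per positive edge
                   gamma = 5,        # weaker repulsion between non-neighbours
                   alpha = 0,        # switch to the alternative distance function (see Details)
                   momentum = 0.9,   # SGD with momentum; must lie between 0 and 1
                   threads = 2,
                   verbose = FALSE)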
Details
The algorithm attempts to estimate a dim-dimensional embedding using stochastic gradient descent and negative sampling.

The objective function is:

O = \sum_{(i,j) \in E} w_{ij} \left( \log f(||p(e_{ij} = 1)||) + \sum_{k=1}^{M} E_{j_k \sim P_n(j)} \, \gamma \log\left(1 - f(||p(e_{ij_k} = 1)||)\right) \right)

where f() is a probabilistic function relating the distance between two points in the low-dimensional projection space and the probability that they are nearest neighbors.

The default probabilistic function is 1 / (1 + \alpha \cdot ||x||^2). If \alpha is set to zero, an alternative probabilistic function, 1 / (1 + \exp(x^2)), will be used instead.
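To see what switching alpha to zero does, the two functions can be evaluated directly; the helper names below are illustrative only and not part of the package.

f_default <- function(x, alpha = 1) 1 / (1 + alpha * x^2)   # used when alpha > 0
f_alt     <- function(x) 1 / (1 + exp(x^2))                 # used when alpha == 0

x <- seq(0, 3, by = 0.5)            # distances in the projection space
round(f_default(x), 3)              # decays polynomially with distance
round(f_alt(x), 3)                  # decays much more sharply at larger distances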
Note that the input matrix should be symmetric. If any columns in the matrix are empty, the function will fail.
Value
A dense [N,D] matrix of the coordinates projecting the w_ij matrix into the lower-dimensional space.
Note
If specified, seed is passed to the C++ code and used to initialize the random number generator. This will not, however, be sufficient to ensure reproducible results, because the initial coordinate matrix is generated using the R random number generator. To ensure reproducibility, call set.seed before calling this function, or pass it a pre-allocated coordinate matrix.
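A minimal sketch of this, assuming the toy wij from the example after Usage; threads = 1 is an added assumption here, on the basis that asynchronous multi-threaded SGD may otherwise introduce run-to-run variation.

set.seed(42)                                   # fixes the R RNG that draws the initial coordinates
emb1 <- projectKNNs(wij, seed = 42, threads = 1, verbose = FALSE)

set.seed(42)
emb2 <- projectKNNs(wij, seed = 42, threads = 1, verbose = FALSE)
all.equal(emb1, emb2)                          # expected to be TRUE

# Alternatively, pass a pre-allocated coordinate matrix through `coords`.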
The original paper called for weights in negative sampling to be calculated according to the degree of each vertex, i.e., the number of edges connecting to the vertex. The reference implementation, however, uses the sum of the weights of the edges to each vertex. In experiments, the difference was imperceptible with small (MNIST-size) datasets, but the results using degree seem aesthetically preferable. The default is to use the edge weights, consistent with the reference implementation.