library(quanteda)

This article shows how quanteda can be used with the text2vec package to replicate its GloVe example.

Download a corpus comprising the texts used in the text2vec vignette:

wiki_corp <- quanteda.corpora::download(url = "https://www.dropbox.com/s/9mubqwpgls3qi9t/data_corpus_wiki.rds?dl=1")

Select features

First, we tokenize the corpus and get the names of the features that occur five times or more. We trim the features before constructing the fcm:

wiki_toks <- tokens(wiki_corp)
feats <- dfm(wiki_toks, verbose = TRUE) %>%
    dfm_trim(min_termfreq = 5) %>%
    featnames()
## Creating a dfm from a tokens input...
##    ... lowercasing
##    ... found 1 document, 253,854 features
##    ... created a 1 x 253,854 sparse dfm
##    ... complete.
## Elapsed time: 2.3 seconds.
# leave the pads so that non-adjacent words will not become adjacent
wiki_toks <- tokens_select(wiki_toks, feats, padding = TRUE)
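To see why the pads matter: removed tokens are replaced by empty pads rather than dropped, so the surviving tokens keep their original positions and distances. A minimal base-R sketch of the idea (the toy token vector and keep set are invented for illustration, not taken from the wiki data):

```r
toks <- c("the", "quick", "brown", "fox")
keep <- c("the", "fox")

# replace removed tokens with empty pads instead of dropping them,
# mimicking tokens_select(..., padding = TRUE)
padded <- ifelse(toks %in% keep, toks, "")
padded
# with the pads, "the" and "fox" remain 3 positions apart, so a
# window-based fcm will not count them as adjacent
distance <- which(padded == "fox") - which(padded == "the")
```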

Construct the feature co-occurrence matrix

wiki_fcm <- fcm(wiki_toks, context = "window", count = "weighted", weights = 1 / (1:5), tri = TRUE)
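The weights argument down-weights co-occurrences by distance: a pair of tokens d positions apart (up to the window size of 5 here) contributes 1/d rather than 1 to the count. A hand-rolled base-R sketch of that weighting for a single repeated token (toy sentence, not the wiki data):

```r
toks <- c("word", "a", "b", "word")   # "word" co-occurs with itself at distance 3
window  <- 5
weights <- 1 / (1:window)             # same scheme as fcm(weights = 1 / (1:5))

positions <- which(toks == "word")
d <- diff(positions)                  # distance between the two occurrences
weighted_count <- weights[d]          # contributes 1/3 instead of 1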

Fit word embedding model

Fit the GloVe model using text2vec.

library(text2vec)

GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space.

GloVe encodes the ratios of word-word co-occurrence probabilities as vector differences; these ratios are thought to capture some crude form of the meaning associated with a word. The training objective of GloVe is to learn word vectors such that their dot product equals the logarithm of the words’ probability of co-occurrence.
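The training objective described above corresponds to the weighted least-squares cost from the Pennington, Socher, and Manning (2014) GloVe paper (notation is theirs, not from this article):

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2
```

where $X_{ij}$ counts co-occurrences of words $i$ and $j$, $w_i$ and $\tilde{w}_j$ are the main and context vectors, $b_i$ and $\tilde{b}_j$ are bias terms, and $f$ is a weighting function that caps the influence of very frequent pairs (controlled by the x_max parameter used below).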

glove <- GlobalVectors$new(word_vectors_size = 50, vocabulary = featnames(wiki_fcm), x_max = 10)
wiki_main <- fit_transform(wiki_fcm, glove, n_iter = 20)
## INFO [2019-07-07 19:41:09] 2019-07-07 19:41:09 - epoch 1, expected cost 0.0829
## INFO [2019-07-07 19:41:16] 2019-07-07 19:41:16 - epoch 2, expected cost 0.0613
## INFO [2019-07-07 19:41:21] 2019-07-07 19:41:21 - epoch 3, expected cost 0.0539
## INFO [2019-07-07 19:41:27] 2019-07-07 19:41:27 - epoch 4, expected cost 0.0499
## INFO [2019-07-07 19:41:33] 2019-07-07 19:41:33 - epoch 5, expected cost 0.0474
## INFO [2019-07-07 19:41:39] 2019-07-07 19:41:39 - epoch 6, expected cost 0.0456
## INFO [2019-07-07 19:41:48] 2019-07-07 19:41:48 - epoch 7, expected cost 0.0443
## INFO [2019-07-07 19:41:57] 2019-07-07 19:41:57 - epoch 8, expected cost 0.0433
## INFO [2019-07-07 19:42:03] 2019-07-07 19:42:03 - epoch 9, expected cost 0.0424
## INFO [2019-07-07 19:42:09] 2019-07-07 19:42:09 - epoch 10, expected cost 0.0417
## INFO [2019-07-07 19:42:15] 2019-07-07 19:42:15 - epoch 11, expected cost 0.0411
## INFO [2019-07-07 19:42:21] 2019-07-07 19:42:21 - epoch 12, expected cost 0.0406
## INFO [2019-07-07 19:42:26] 2019-07-07 19:42:26 - epoch 13, expected cost 0.0401
## INFO [2019-07-07 19:42:32] 2019-07-07 19:42:32 - epoch 14, expected cost 0.0397
## INFO [2019-07-07 19:42:38] 2019-07-07 19:42:38 - epoch 15, expected cost 0.0393
## INFO [2019-07-07 19:42:43] 2019-07-07 19:42:43 - epoch 16, expected cost 0.0390
## INFO [2019-07-07 19:42:50] 2019-07-07 19:42:50 - epoch 17, expected cost 0.0387
## INFO [2019-07-07 19:42:56] 2019-07-07 19:42:56 - epoch 18, expected cost 0.0385
## INFO [2019-07-07 19:43:02] 2019-07-07 19:43:02 - epoch 19, expected cost 0.0382
## INFO [2019-07-07 19:43:08] 2019-07-07 19:43:08 - epoch 20, expected cost 0.0380

Averaging learned word vectors

The model learns two sets of vectors, main and context. According to the GloVe paper, averaging the two word vectors results in more accurate representations.
wiki_context <- glove$components
dim(wiki_context)
## [1]    50 71290
wiki_vectors <- as.dfm(wiki_main + t(wiki_context))

Examining term representations

Now we can find the closest word vectors to paris - france + germany:

new_berlin <- wiki_vectors["paris", ] -
    wiki_vectors["france", ] +
    wiki_vectors["germany", ]

# calculate the similarity
cos_sim <- textstat_simil(wiki_vectors, new_berlin,
                          margin = "documents", method = "cosine")
head(sort(cos_sim[, 1], decreasing = TRUE), 5)
##     paris      bonn   germany    berlin    vienna
## 0.7682481 0.7002557 0.6795966 0.6790626 0.6522549
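textstat_simil() with method = "cosine" computes the usual cosine similarity between row vectors. A minimal base-R equivalent, using toy 3-dimensional vectors invented for illustration rather than the learned embeddings:

```r
# cosine similarity: dot product divided by the product of vector norms
cosine <- function(x, y) sum(x * y) / (sqrt(sum(x^2)) * sqrt(sum(y^2)))

# identical vectors have similarity 1; orthogonal vectors have similarity 0
sim_same <- cosine(c(1, 2, 0), c(1, 2, 0))
sim_orth <- cosine(c(1, 0, 0), c(0, 1, 0))
```

Ranking all rows of the embedding matrix by this score against the query vector is exactly what the sort() over cos_sim does above.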

Here is another example, for london = paris - france + uk + england:

new_london <- wiki_vectors["paris", ] -
    wiki_vectors["france", ] +
    wiki_vectors["uk", ] +
    wiki_vectors["england", ]

cos_sim <- textstat_simil(wiki_vectors, new_london,
                          margin = "documents", method = "cosine")
head(sort(cos_sim[, 1], decreasing = TRUE), 5)
##        uk    london   england        at      york
## 0.7784685 0.7483682 0.7380975 0.7348357 0.7289596