Changes

quanteda 2.0 introduces some major changes, detailed here.

  1. New corpus object structure.

    The internals of the corpus object have been redesigned and are now based on a character vector, with metadata and system data stored in attributes. All existing extractor and replacement functions have been updated to work with the new structure, so if you were using these before, you should not even notice the change. Docvars are now handled separately from the texts, in the same way that docvars are handled for tokens objects.
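
    A minimal sketch of the unchanged user-facing workflow (the document names, texts, and docvars here are invented for illustration):

        library(quanteda)

        # construct a corpus; the texts are stored as a character vector,
        # while the docvars live in a separate attribute
        corp <- corpus(c(d1 = "Spending is up.", d2 = "Taxes are down."),
                       docvars = data.frame(party = c("Gov", "Opp")))

        texts(corp)    # extractor functions work as before
        docvars(corp)  # docvars are retrieved separately from the texts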

  2. New metadata handling.

    Corpus-level metadata is now stored in a user metadata list, set and retrieved via meta() and meta<-(). metacorpus() is kept as a synonym for meta(), for backwards compatibility. Additional system-level corpus information is also recorded automatically when an object is created.

    Document-level metadata is deprecated: all document-level information is now simply a “docvar”. For backward compatibility, metadoc() is kept, and inserts document variables (docvars) whose names are prefixed with an underscore.
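
    A brief sketch of the new accessors (the field names here are invented for illustration):

        library(quanteda)

        corp <- corpus(c(d1 = "First text.", d2 = "Second text."))

        # user-level corpus metadata
        meta(corp) <- list(source = "example documents")
        meta(corp)        # retrieves the user metadata list
        metacorpus(corp)  # synonym, kept for backward compatibility

        # document-level "metadata" is now just a docvar; metadoc()
        # stores it under a name prefixed with an underscore
        metadoc(corp, "language") <- "en"
        metadoc(corp)     # returns the "_language" document variable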

  3. Corpus objects now store default summary statistics for efficiency. When these are present, summary.corpus() retrieves them rather than computing them on the fly.

  4. New index operators for core objects. The main change is the redefinition of the $ operator for corpus, tokens, and dfm objects (all objects that retain docvars) so that it accesses a single docvar by name. Other index operators have been redefined as well: [.corpus now returns a slice of a corpus, and [[.corpus returns the texts from a corpus.

    See the full details at https://github.com/quanteda/quanteda/wiki/indexing_core_objects.
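
    A short sketch of the new operators (document names and docvars invented for illustration):

        library(quanteda)

        corp <- corpus(c(d1 = "a b c", d2 = "b c d", d3 = "c d e"),
                       docvars = data.frame(grp = c("x", "x", "y")))

        corp$grp    # $ accesses a single docvar by name
        corp[1:2]   # [ returns a corpus slice (still a corpus)
        corp[[1]]   # [[ returns the text of a document

        toks <- tokens(corp)
        toks$grp       # the same accessor works for tokens ...
        dfm(toks)$grp  # ... and for dfm objects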

  5. *_subset() functions.

    The subset argument must now be logical, and the select argument has been removed. (The select argument was inherited from base::subset(), but never made sense, either in quanteda or in base.)
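
    For example (with an invented docvar):

        library(quanteda)

        corp <- corpus(c(d1 = "text one", d2 = "text two"),
                       docvars = data.frame(year = c(1990, 2010)))

        # subset must be a logical expression evaluated among the docvars
        corpus_subset(corp, year >= 2000)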

  6. Return format from textstat_simil() and textstat_dist().

    These functions now return a sparse matrix object from the Matrix package by default. A coercion method for as.data.frame() makes them return a data.frame, just like the other textstat functions; additional coercion methods are provided for as.dist(), as.simil(), and as.matrix().
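
    A quick sketch of the new return format and coercions (toy documents invented for illustration):

        library(quanteda)

        dfmat <- dfm(tokens(c(d1 = "a b c d", d2 = "a a b", d3 = "b c c")))
        sim <- textstat_simil(dfmat, method = "cosine")

        class(sim)          # a sparse object built on the Matrix package
        as.data.frame(sim)  # long-format data.frame, like other textstat_*()
        as.matrix(sim)      # dense symmetric matrix
        # as.dist(sim) and as.simil(sim) are also available, e.g. for hclust()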

  7. The settings functions (and the related slots and object attributes) are gone. They are replaced by a new meta(x, type = "object") that records object-specific metadata, including settings such as the n for tokens (to record the ngrams).
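
    For example, the recorded settings of a tokens object can be inspected directly (a sketch; the exact fields recorded may vary):

        library(quanteda)

        toks <- tokens("Settings are now recorded automatically.")
        meta(toks, type = "object")  # object settings, e.g. the ngram n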

  8. All included data objects are upgraded to the new formats. This includes the three corpus objects, the single dfm data object, and the LSD 2015 dictionary object.

  9. New print methods for core objects (corpus, tokens, dfm, dictionary) now exist, each with new global options to control the number of documents shown, as well as the length of a text snippet (corpus), the tokens (tokens), the dfm cells (dfm), or the keys and values (dictionary). In particular, printing a corpus can now include brief summaries of the texts, with the number of documents shown and the length of the previews controlled by these new global options, similar to the extended printing options for dfm objects.

  10. All textmodels and related functions have been moved to a new package, quanteda.textmodels. This makes them easier to maintain and update, and keeps down the size of the core package.

  11. quanteda v2 implements major changes to the tokens() constructor. These are designed to simplify the code and its maintenance in quanteda, to allow users to work with other (external) tokenizers, and to improve consistency across the tokens processing options. Changes include:

    • A new method tokens.list(x, ...) constructs a tokens object from a named list of characters, allowing users to tokenize texts using some other function (or package), such as tokenize_words(), tokenize_sentences(), or tokenize_tweets() from the tokenizers package, or the list returned by spacyr::spacy_tokenize(). Users can therefore apply their tokenizer of choice, as long as it returns a named list of characters. With tokens.list(), all tokens processing (remove_*) options can be applied, or the list can be converted directly to a tokens object without processing using as.tokens.list(). (See the sketch following this list.)

    • All tokens options are now intervention options, splitting or removing things that by default are not split or removed. All remove_* options to tokens() now remove elements from the constructed tokens object, by calling tokens.tokens() after construction. “Pre-processing” is thus actually post-processing, using tokens_*() methods internally after a conservative tokenization on token boundaries. This improves both performance and the consistency of handling special characters (e.g. Twitter characters) across different tokenizer engines. (#1503, #1446, #1801)

    Note that tokens.tokens() will remove what is found, but cannot “undo” a removal; for instance, it cannot restore punctuation characters that have already been removed.

    • The option remove_hyphens is removed and replaced by split_hyphens, which is FALSE by default, meaning that infix (internal) hyphens are preserved rather than split. This behaviour is implemented in both the what = "word" and what = "word2" tokenizer options.

    • The option remove_twitter has been removed. The new what = "word" is a smarter tokenizer that preserves social media tags, URLs, and email addresses. “Tags” are defined as valid social media hashtags and usernames (following Twitter rules for validity), and their # and @ characters are preserved even when remove_punct = TRUE.
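
    A sketch of the external-tokenizer workflow described above (assuming the tokenizers package is installed; the texts are invented for illustration):

        library(quanteda)

        txt <- c(doc1 = "A first sentence, with a #hashtag!",
                 doc2 = "Second text, from @auser.")

        # tokenize externally, then hand the named list to tokens()
        toklist <- tokenizers::tokenize_words(txt, lowercase = FALSE)
        toks <- tokens(toklist, remove_punct = TRUE)  # options still apply

        # or convert directly, with no processing
        toks2 <- as.tokens(toklist)

        # remove_* options are post-processing: this is equivalent to
        # calling tokens() again on an existing tokens object
        toks3 <- tokens(tokens(txt), remove_punct = TRUE)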

New features

  • Changed the default value of the size argument in dfm_sample() to the number of features, not the number of documents. (#1643)
  • Fixed a few CRAN-related issues (compiler warnings on Solaris and encoding warnings on r-devel-linux-x86_64-debian-clang).
  • Added startpos and endpos arguments to tokens_select(), for selecting on token positions relative to the start or end of the tokens in each document (see the sketch following this list). (#1475)
  • Added a convert() method for corpus objects, to convert them into data.frame or json formats (also shown below).
  • Added a spacy_tokenize() method for corpus objects, to provide direct access via the spacyr package.
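
  A brief sketch of the positional selection and corpus conversion described above (the texts are invented for illustration):

      library(quanteda)

      toks <- tokens(c(d1 = "one two three four five six"))

      # keep matches only within the first three token positions
      tokens_select(toks, pattern = "*", startpos = 1, endpos = 3)

      # negative positions count from the end of each document
      tokens_select(toks, pattern = "*", startpos = -2, endpos = -1)

      # convert a corpus to a data.frame (use to = "json" for JSON output)
      corp <- corpus(c(d1 = "First text.", d2 = "Second text."))
      convert(corp, to = "data.frame")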

Behaviour changes

  • Added a force = TRUE option and error checking for cases where dfm_weight() or dfm_group() is applied to a dfm that has already been weighted (see the sketch following this list). (#1545) The function textstat_frequency() now allows this argument to be passed through to dfm_group() via the ... argument. (#1646)
  • textstat_frequency() now has a new argument for resolving ties when ranking term frequencies, defaulting to the “min” method. (#1634)
  • New docvars accessor and replacement functions are available for corpus, tokens, and dfm objects via $. (See Index Operators for Core Objects above.)
  • textstat_entropy() now produces a data.frame that is more consistent with other textstat methods. (#1690)
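
  A minimal sketch of the new force behaviour for weighted dfm objects (toy documents invented for illustration):

      library(quanteda)

      dfmat <- dfm(tokens(c(d1 = "a a b", d2 = "b c c")))
      w <- dfm_weight(dfmat, scheme = "prop")

      # re-weighting an already weighted dfm is now an error ...
      # dfm_weight(w, scheme = "logcount")
      # ... unless forced explicitly
      dfm_weight(w, scheme = "logcount", force = TRUE)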

Bug fixes and stability enhancements

  • docnames are now enforced to be character (formerly, they could be numeric for some objects).
  • docnames are now enforced to be strictly unique for all object classes.
  • Grouping operations in tokens_group() and dfm_group() are more robust to using multiple grouping variables, and preserve these correctly as docvars in the new dfm. (#1809)
  • Fixed the documentation of the ... arguments in two functions that was previously causing CRAN check failures on the release of 1.5.2.

Other improvements

  • All of the (three) included corpus objects have been cleaned up and augmented with improved meta-data and docvars. The inaugural speech corpus, for instance, now includes the President’s political party affiliation.

Changes

  • Moved data_corpus_irishbudget2010 and data_corpus_dailnoconf1991 to the quanteda.textmodels package.
  • Em dashes and double dashes between words, whether surrounded by a space or not, are now converted to “ - ” to distinguish them from infix hyphens. (#1889)
  • Verbose output for dfm and tokens creation is now corrected and more consistent. (#1894)

Bug fixes and stability enhancements

  • Number removal is now both improved and fixed. (#1909)
  • Fixed an issue causing CRAN errors on pre-release R 4.0, related to the new default of stringsAsFactors = FALSE for data.frame objects.
  • Fixed an error in the print method for dfm objects. (#1897)
  • Fixed a bug in tokens_replace() that occurred when the pattern was not matched. (#1895)
  • Fixed a bug whereby dimension names were not exchanged when a dfm was transposed. (#1903)