This function retrieves stopwords of the type specified in the kind argument and returns the stopword list as a character vector. The default is English.

stopwords(kind = "english")

Arguments

kind

The pre-set kind of stopwords (as a character string). Allowed values are english, SMART, danish, french, greek, hungarian, norwegian, russian, swedish, catalan, dutch, finnish, german, italian, portuguese, spanish, and arabic.

Value

A character vector of stopwords.

Details

The stopword list is an internal data object named data_char_stopwords, which consists of English stopwords from the SMART information retrieval system (obtained from http://jmlr.csail.mit.edu/papers/volume5/lewis04a/a11-smart-stop-list/english.stop) and a set of stopword lists from the Snowball stemmer project in different languages (see http://snowballstem.org/projects.html). Supported languages are Arabic, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish. Language names should be lower-case (except for "SMART" -- see below) and are case sensitive.

A note of caution

Stop words are an arbitrary choice imposed by the user, and accessing a pre-defined list of words to ignore does not mean that it will perfectly fit your needs. You are strongly encouraged to inspect the list and to make sure it fits your particular requirements. The built-in English stopword list does not contain "will", for instance, because of its multiple meanings, but you might want to include this word for your own application.
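As a sketch of the inspection step described above (assuming the package providing stopwords() is attached -- quanteda is used here as an assumption based on the functions in the Examples):

```r
library(quanteda)  # assumed package providing stopwords()

# confirm that "will" is absent from the built-in English list
"will" %in% stopwords("english")
#> [1] FALSE

# build a custom list that adds it back for your own application
mystopwords <- c(stopwords("english"), "will")
```

Storing the extended vector in your own object keeps the built-in list untouched while letting you pass the customized version to downstream functions.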

Examples

head(stopwords("english"))
#> [1] "i" "me" "my" "myself" "we" "our"
head(stopwords("italian"))
#> [1] "ad" "al" "allo" "ai" "agli" "all"
head(stopwords("arabic"))
#> [1] "فى" "في" "كل" "لم" "لن" "له"
head(stopwords("SMART"))
#> [1] "a" "a's" "able" "about" "above" "according"
# adding to the built-in stopword list
toks <- tokenize("The judge will sentence Mr. Adams to nine years in prison",
                 remove_punct = TRUE)
removeFeatures(toks, c(stopwords("english"), "will", "mr", "nine"))
#> tokenizedTexts from 1 document.
#> Component 1 :
#> [1] "judge"    "sentence" "Adams"    "years"    "prison"