Precomputed Word Embeddings for 15+ Languages

Authors

HERMAN Ondřej

Year of publication: 2021
Type: Article in proceedings
Conference: Recent Advances in Slavonic Natural Language Processing (RASLAN 2021)
Faculty / MU workplace

Faculty of Informatics

Keywords: Word embeddings; Sketch Engine; Corpora
Description: Word embeddings serve as a useful resource for many downstream natural language processing tasks. The embeddings map, or embed, the lexicon of a language onto a vector space, in which various operations can be carried out easily using the established machinery of linear algebra. The unbounded nature of language can be problematic, and word embeddings provide a way of compressing words into a manageable dense space. The position of a word in the vector space is given by the context the word appears in, or, as the distributional hypothesis postulates, a word is characterized by the company it keeps [2]. As similar words appear in similar contexts, their positions will also be close to each other in the embedding vector space, and because of this many useful semantic properties of words are preserved in the embedding.
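
The nearest-neighbour behaviour described above can be illustrated with a short sketch. This is a minimal example rather than code from the publication: it assumes the precomputed vectors are distributed in the standard word2vec text format, and the file name embeddings.en.vec as well as the query words are placeholders.

    from gensim.models import KeyedVectors

    # Load precomputed vectors stored in the word2vec text format
    # (a header line, then one token per line followed by its coordinates).
    # "embeddings.en.vec" is a placeholder file name.
    vectors = KeyedVectors.load_word2vec_format("embeddings.en.vec", binary=False)

    # The distributional hypothesis in action: words appearing in similar
    # contexts receive nearby vectors, so the nearest neighbours of a word
    # are semantically related words.
    for word, similarity in vectors.most_similar("computer", topn=5):
        print(f"{word}\t{similarity:.3f}")

    # Cosine similarity between two individual words.
    print(vectors.similarity("computer", "keyboard"))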
Related projects:
