This tool provides an efficient implementation of the continuous bag-of-words and skip-gram architectures for computing vector representations of words. The word2vec tool takes a text corpus as input and produces word vectors as output: it first constructs a vocabulary from the training text and then learns vector representations of the words. The resulting word vectors can be used as features in many natural language processing and machine learning applications, and in further research.
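The two-pass pipeline described above — build a vocabulary from the corpus, then generate training examples from context windows — can be sketched as follows. This is a minimal illustration, not the tool's actual C implementation; the toy corpus, window size, and helper names are assumptions made for the example.

```python
from collections import Counter

def build_vocab(corpus, min_count=1):
    # First pass: count word frequencies and keep words at or above the
    # threshold, assigning lower indices to more frequent words.
    counts = Counter(w for sentence in corpus for w in sentence)
    kept = sorted((wc for wc in counts.items() if wc[1] >= min_count),
                  key=lambda wc: -wc[1])
    return {w: i for i, (w, _) in enumerate(kept)}

def skipgram_pairs(sentence, window=2):
    # Second pass: for each center word, emit (center, context) pairs from
    # the surrounding window -- the training examples of the skip-gram model.
    pairs = []
    for i, center in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        pairs.extend((center, sentence[j]) for j in range(lo, hi) if j != i)
    return pairs

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = build_vocab(corpus)            # e.g. {"the": 0, "sat": 1, ...}
pairs = skipgram_pairs(corpus[0])      # (center, context) training pairs
```

A real implementation would feed these pairs into a shallow network (with negative sampling or hierarchical softmax) to learn the vectors; the CBOW architecture instead predicts the center word from the averaged context.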

References in zbMATH (referenced in 19 articles)


  1. Gallé, Matthias; Tealdi, Matías: $xkcd$-repeats: A new taxonomy of repeats defined by their context diversity (2018)
  2. Baba, Kensuke: An acceleration of FFT-based algorithms for the match-count problem (2017)
  3. Chen, Peixian; Zhang, Nevin L.; Liu, Tengfei; Poon, Leonard K. M.; Chen, Zhourong; Khawar, Farhan: Latent tree models for hierarchical topic detection (2017)
  4. Dediu, Adrian-Horia; Matos, Joana M.; Martín-Vide, Carlos: Natural language processing, moving from rules to data (2017)
  5. Lauly, Stanislas; Zheng, Yin; Allauzen, Alexandre; Larochelle, Hugo: Document neural autoregressive distribution estimation (2017)
  6. Papyan, Vardan; Romano, Yaniv; Elad, Michael: Convolutional neural networks analyzed via convolutional sparse coding (2017)
  7. Tian, Ran; Okazaki, Naoaki; Inui, Kentaro: The mechanism of additive composition (2017)
  8. van Gerven, Marcel A. J.: A primer on encoding models in sensory neuroscience (2017)
  9. Zeng, An; Shen, Zhesi; Zhou, Jianlin; Wu, Jinshan; Fan, Ying; Wang, Yougui; Stanley, H. Eugene: The science of science: from the perspective of complex systems (2017)
  10. Agerri, Rodrigo; Rigau, German: Robust multilingual named entity recognition with shallow semi-supervised features (2016)
  11. Camacho-Collados, José; Pilehvar, Mohammad Taher; Navigli, Roberto: Nasari: integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities (2016)
  12. Chen, Wenliang; Zhang, Min; Zhang, Yue; Duan, Xiangyu: Exploiting meta features for dependency parsing and part-of-speech tagging (2016)
  13. Gallay, Ladislav; Šimko, Marián: Utilizing vector models for automatic text lemmatization (2016)
  14. Joshi, Shalmali; Ghosh, Joydeep; Reid, Mark; Koyejo, Oluwasanmi: Rényi divergence minimization based co-regularized multiview clustering (2016)
  15. McQueen, James; Meilä, Marina; VanderPlas, Jacob; Zhang, Zhongyue: Megaman: scalable manifold learning in python (2016)
  16. Bengio, Yoshua (ed.): Editorial introduction to the neural networks special issue on deep learning of representations (2015)
  17. Derrac, Joaquín; Schockaert, Steven: Inducing semantic relations from conceptual spaces: a data-driven approach to plausible reasoning (2015)
  18. Dhillon, Paramveer S.; Foster, Dean P.; Ungar, Lyle H.: Eigenwords: spectral word embeddings (2015)
  19. Pilehvar, Mohammad Taher; Navigli, Roberto: From senses to texts: an all-in-one graph-based approach for measuring semantic similarity (2015)