RoBERTa

RoBERTa: A Robustly Optimized BERT Pretraining Approach. RoBERTa revisits BERT's pretraining procedure with four simple modifications: training the model longer, with bigger batches, over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data. See the associated paper for more details.
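
The dynamic-masking change is easy to illustrate in code. Below is a minimal sketch, assuming the Hugging Face transformers library (not part of this entry): its DataCollatorForLanguageModeling re-samples the masked positions every time a batch is assembled, so repeated passes over the same sentence see different masking patterns, in contrast to BERT's original setup, where masks were generated once during preprocessing and reused in every epoch.

    from transformers import RobertaTokenizerFast, DataCollatorForLanguageModeling

    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    collator = DataCollatorForLanguageModeling(
        tokenizer=tokenizer,
        mlm=True,              # masked language modeling only; no next sentence prediction
        mlm_probability=0.15,  # mask 15% of tokens, as in BERT/RoBERTa
    )

    encoded = tokenizer("Dynamic masking samples new positions on every pass.")
    # Two calls on the same example generally yield two different masking
    # patterns, because the mask is drawn afresh each time a batch is built.
    batch_a = collator([encoded["input_ids"]])
    batch_b = collator([encoded["input_ids"]])
    print(batch_a["input_ids"])  # <mask> tokens in (usually) different places
    print(batch_b["input_ids"])

The collator also builds the labels tensor, setting unmasked positions to -100 so the loss is computed only on the masked tokens.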


References in zbMATH (referenced in 11 articles)

Sorted by year.

  1. Juan Manuel Pérez, Juan Carlos Giudici, Franco Luque: pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks (2021) arXiv
  2. Sai Muralidhar Jayanthi, Kavya Nerella, Khyathi Raghavi Chandu, Alan W Black: CodemixedNLP: An Extensible and Open NLP Toolkit for Code-Mixing (2021) arXiv
  3. Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu: CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation (2021) arXiv
  4. Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, Zexiong Pang, Yongxin Zhang, Zhengyan Li, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xinwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Bolin Zhu, Shan Qin, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang: TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing (2021) arXiv
  5. Jaap Jumelet: diagNNose: A Library for Neural Activation Analysis (2020) arXiv
  6. Pieter Delobelle, Thomas Winters, Bettina Berendt: RobBERT: a Dutch RoBERTa-based Language Model (2020) arXiv
  7. Yada Pruksachatkun, Phil Yeres, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, Samuel R. Bowman: jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models (2020) arXiv
  8. Yao Wan, Yang He, Jian-Guo Zhang, Yulei Sui, Hai Jin, Guandong Xu, Caiming Xiong, Philip S. Yu: NaturalCC: A Toolkit to Naturalize the Source Code Corpus (2020) arXiv
  9. Nils Reimers, Iryna Gurevych: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (2019) arXiv
  10. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu: TinyBERT: Distilling BERT for Natural Language Understanding (2019) arXiv
  11. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le: XLNet: Generalized Autoregressive Pretraining for Language Understanding (2019) arXiv