References in zbMATH (referenced in 15 articles, 1 standard article)

Sorted by year (citations)

  1. Bihlo, Alex; Popovych, Roman O.: Physics-informed neural networks for the shallow-water equations on the sphere (2022)
  2. Trummer, Immanuel: CodexDB: generating code for processing SQL queries using GPT-3 Codex (2022) arXiv
  3. Liu, Chaoyue; Zhu, Libin; Belkin, Mikhail: Loss landscapes and optimization in over-parameterized non-linear systems and neural networks (2022)
  4. Liu, Ruibo; Jia, Chenyan; Wei, Jason; Xu, Guangxuan; Vosoughi, Soroush: Quantifying and alleviating political bias in language models (2022)
  5. Loureiro, Daniel; Mário Jorge, Alípio; Camacho-Collados, Jose: LMMS reloaded: transformer-based sense embeddings for disambiguation and beyond (2022)
  6. Ras, Gabrielle; Xie, Ning; van Gerven, Marcel; Doran, Derek: Explainable deep learning: a field guide for the uninitiated (2022)
  7. Ye, Jong Chul: Geometry of deep learning. A signal processing perspective (2022)
  8. Bakhtin, Anton; Deng, Yuntian; Gross, Sam; Ott, Myle; Ranzato, Marc’Aurelio; Szlam, Arthur: Residual energy-based models for text (2021)
  9. Celledoni, Elena; Ehrhardt, Matthias J.; Etmann, Christian; Owren, Brynjulf; Schönlieb, Carola-Bibiane; Sherry, Ferdia: Equivariant neural networks for inverse problems (2021)
  10. Grabovoy, A. V.; Strijov, V. V.: Bayesian distillation of deep learning models (2021)
  11. Jurewicz, Mateusz; Derczynski, Leon: Set-to-sequence methods in machine learning: a review (2021)
  12. Lin, Licong; Dobriban, Edgar: What causes the test error? Going beyond bias-variance via ANOVA (2021)
  13. Silver, David; Singh, Satinder; Precup, Doina; Sutton, Richard S.: Reward is enough (2021)
  14. Tripathy, Jatin Karthik; Sethuraman, Sibi Chakkaravarthy; Cruz, Meenalosini Vimal; Namburu, Anupama; P., Mangalraj; R., Nandha Kumar; S., Sudhakar Ilango; Vijayakumar, Vaidehi: Comprehensive analysis of embeddings and pre-training in NLP (2021)
  15. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; et al.: Language models are few-shot learners (2020) arXiv