Reading List for Text Representation

  1. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. ICLR'19. Local copy
  2. Multi-Task Deep Neural Networks for Natural Language Understanding. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. ACL'19. Local copy
  3. Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness. Yilun Zhou, Steven Schockaert, Julie A. Shah. Local copy
  4. Revealing the Dark Secrets of BERT. Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky. EMNLP'19. Local copy
  5. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. NAACL'19. Local copy
  6. Linguistic Knowledge and Transferability of Contextual Representations. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith. NAACL'19. Local copy
  7. LSTMs Exploit Linguistic Attributes of Data. Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, Noah A. Smith. Proceedings of the 3rd Workshop on Representation Learning for NLP (RepL4NLP), ACL'18. Local copy

Yisong’s Comments

  1. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. ICLR'19. Local copy

    A few takeaways from this paper, in my summary:

    Even the best baseline model still achieves a fairly low absolute score. Analysis with the paper's diagnostic dataset reveals that the baseline models handle strong lexical signals well but struggle with deeper logical structure.
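
    To make the benchmark concrete, below is a minimal sketch (my addition, not from the paper) of loading a GLUE task and the diagnostic set with the Hugging Face datasets library; the library choice and config names ("glue", "mrpc", "ax") are assumptions about current tooling, not something the paper prescribes.

      # Minimal sketch of loading GLUE data via Hugging Face `datasets`
      # (an assumed tool choice; the paper itself only defines the benchmark).
      from datasets import load_dataset

      # One of the nine GLUE tasks, e.g. paraphrase detection (MRPC).
      mrpc = load_dataset("glue", "mrpc")
      print(mrpc["train"][0])   # fields: sentence1, sentence2, label, idx

      # The diagnostic set mentioned above ships as the "ax" config:
      # expert-constructed NLI pairs annotated with the linguistic
      # phenomena (lexical semantics, logic, etc.) they are meant to probe.
      diagnostic = load_dataset("glue", "ax")
      print(diagnostic["test"][0])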