NYU will be well represented at NAACL, one of the three major conferences in NLP and computational linguistics, in Minneapolis in early June. We'll see four papers at the main conference:
Studying the inductive biases of RNNs with synthetic variations of natural languages
Shauli Ravfogel, Yoav Goldberg and Tal Linzen (NYU Linguistics PhD '15)
Subword-Level Language Identification for Intra-Word Code-Switching
Manuel Mager, Özlem Çetinoğlu and Katharina Kann (NYU Data Science)
On the Idiosyncrasies of the Mandarin Chinese Classifier System
Shijia Liu, Hongyuan Mei, Adina Williams (NYU Linguistics PhD '18) and Ryan Cotterell
On Measuring Social Biases in Sentence Encoders
Chandler May, Alex Wang (NYU CS), Shikha Bordia (NYU CS), Samuel R. Bowman (NYU Linguistics) and Rachel Rudinger
In addition, Sam Bowman will give a tutorial mini-course on natural language inference and an invited talk at the attached *SEM computational semantics mini-conference, and Alex Wang (NYU CS) and Shikha Bordia (NYU CS) will both present additional papers at attached workshops.
In related news, we saw three NYU Linguistics-connected papers appear at the International Conference on Learning Representations, a major conference on artificial intelligence and machine learning, which took place in New Orleans earlier this month:
RNNs Implicitly Implement Tensor Product Representations
R. Thomas McCoy, Tal Linzen (NYU Linguistics PhD '15), Ewan Dunbar and Paul Smolensky
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang (NYU CS), Amanpreet Singh (NYU CS), Julian Michael, Felix Hill, Omer Levy and Samuel R. Bowman (NYU Linguistics)
What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang (NYU CS), Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman (NYU Linguistics), Dipanjan Das and Ellie Pavlick