Robust Text Perturbation using Sequence-to-Sequence Pre-Training. Nishtha Madaan, Diptikalyan Saha, et al. NeurIPS 2021.
Identification of Enzymatic Active Sites with Unsupervised Language Modelling. Loic Kwate Dassi, Matteo Manica, et al. NeurIPS 2021.
Leveraging Adversarial Reprogramming for Novel Structure-constrained Protein Sequence Design. Devleena Das, Inkit Padhi, et al. NeurIPS 2021.
Fair Data Generation using Language Models with Hard Constraints. SK Mainul Islam, Abhinav Nagpal, et al. NeurIPS 2021.
Grapher: Multi-Stage Knowledge Graph Construction using Pretrained Language Models. Igor Melnyk, Pierre Dognin, et al. NeurIPS 2021.
Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A Case Study in Indo-Aryan Languages. Tejas Indulal Dhamecha, Rudra Murthy Venkataramana, et al. EMNLP 2021.
End-to-End Learning of Flowchart Grounded Task-Oriented Dialogs. Dinesh Raghu, Shantanu Agarwal, et al. EMNLP 2021.
Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. Jiawei Zhou, Tahira Naseem, et al. EMNLP 2021.
MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents. Song Feng, Siva Sankalp Patel, et al. EMNLP 2021.