Leo Liu (@ZEYULIU10)

#EMNLP2021 I am excited to introduce our EMNLP Findings paper “Probing Across Time: What Does RoBERTa Know and When?”. We investigate the learning dynamics of RoBERTa with a diverse set of probes: linguistic, factual, commonsense, and reasoning.
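
For context on the method: probing freezes a checkpoint, extracts representations, and fits a lightweight classifier per task; tracking probe accuracy over pretraining checkpoints gives the "across time" picture. A minimal sketch (not the paper's code; the toy acceptability task and checkpoint list are placeholders):

```python
# Minimal probing sketch: fit a linear probe on frozen features from each
# pretraining checkpoint and track accuracy over time. The checkpoint list
# and toy task below are placeholders, not the paper's actual setup.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

checkpoints = ["roberta-base"]  # add intermediate pretraining checkpoints here
texts = ["the cat sleeps", "the cats sleeps", "she runs fast", "she run fast"]
labels = [1, 0, 1, 0]  # toy grammatical-acceptability probe

for ckpt in checkpoints:
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModel.from_pretrained(ckpt)
    with torch.no_grad():
        enc = tok(texts, padding=True, return_tensors="pt")
        # Mean-pool the final hidden states as frozen sentence features.
        feats = model(**enc).last_hidden_state.mean(dim=1).numpy()
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(ckpt, probe.score(feats, labels))  # real setups score held-out data
```
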
Ruiqi Zhong (@ZhongRuiqi)

We can prompt language models for 0-shot learning ... but it's not what they are optimized for😢. 

Our #emnlp2021 paper proposes a straightforward fix: 'Adapting LMs for 0-shot Learning by Meta-tuning on Dataset and Prompt Collections'. 

Many interesting takeaways below 👇
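
The core move in meta-tuning is to unify many labeled datasets into one "context + question, answer yes/no" format and fine-tune on that collection, so unseen tasks become just more questions. A schematic sketch (the dataset examples and prompts here are illustrative, not the paper's collection):

```python
# Schematic of the meta-tuning data format (illustrative, not the paper's
# exact prompts): every classification example becomes (context, question,
# yes/no), so zero-shot classification on unseen tasks is just QA.
def to_meta_example(text, label, prompt):
    return {"input": f"{text}\nQuestion: {prompt}", "output": "yes" if label else "no"}

datasets = {
    "sentiment": ("I loved this movie!", 1, "Is the review positive?"),
    "spam":      ("WIN a FREE phone now!!!", 1, "Is this message spam?"),
    "topic":     ("The Fed raised rates.", 0, "Is this article about sports?"),
}

meta_training_set = [to_meta_example(t, y, p) for t, y, p in datasets.values()]
for ex in meta_training_set:
    print(ex["input"], "->", ex["output"])
# A pretrained LM is then fine-tuned on this collection; at test time it is
# evaluated on prompts from datasets held out of the collection.
```
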
Zexuan Zhong (@ZexuanZhong)

Dense retrieval models (e.g. DPR) achieve SOTA on various datasets. Does this really mean dense models are better than sparse models (e.g. BM25)? 
No! Our #EMNLP2021 paper shows that dense retrievers fail even on simple entity-centric questions.

arxiv.org/abs/2109.08535  (1/6)
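
For intuition, here is a toy sparse-vs-dense scoring comparison (rank_bm25 and an off-the-shelf sentence-transformers model stand in for BM25 and a DPR-style retriever; this is not the paper's setup):

```python
# Toy sparse-vs-dense comparison on an entity-centric query.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Arve Furset is a Norwegian jazz musician and composer.",
    "Miles Davis was an American jazz trumpeter.",
]
query = "What instrument does Arve Furset play?"  # rare-entity question

# Sparse: exact lexical match on the rare entity name scores passage 0 highly.
bm25 = BM25Okapi([d.lower().split() for d in corpus])
print("BM25 scores:", bm25.get_scores(query.lower().split()))

# Dense: similarity lives in embedding space, where entities seen rarely at
# training time can be poorly represented.
enc = SentenceTransformer("all-MiniLM-L6-v2")
emb_q = enc.encode(query, convert_to_tensor=True)
emb_d = enc.encode(corpus, convert_to_tensor=True)
print("dense scores:", util.cos_sim(emb_q, emb_d))
```
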
Csordás Róbert (@robert_csordas)

I'm happy to announce that our paper 'The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers' has been accepted to #EMNLP2021!

paper: arxiv.org/abs/2108.12284
code: github.com/robertcsordas/…

1/4
Elizabeth Salesky (@esalesk)

Our work on visual text representations will be presented at #EMNLP2021!

Rather than unicode-based character or subword representations, we render text as images for translation, improving robustness (see 🧵).

Paper 📝: arxiv.org/abs/2104.08211
Code ⌨ : github.com/esalesky/visrep
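
The core idea can be sketched with PIL: render the sentence to a grayscale image and slice it into overlapping frames that take the place of subword embeddings. The window and stride values below are illustrative, not the paper's settings:

```python
# Minimal sketch of rendering text as an image for an MT encoder.
import numpy as np
from PIL import Image, ImageDraw

def render(text, height=24, width_per_char=10):
    # White canvas, black text, PIL's default bitmap font.
    img = Image.new("L", (width_per_char * len(text), height), color=255)
    ImageDraw.Draw(img).text((2, 4), text, fill=0)
    return np.asarray(img)

def frames(pixels, window=24, stride=12):
    # Overlapping horizontal slices play the role of a "token" sequence,
    # so there is no subword vocabulary and no UNK token to break on.
    return [pixels[:, i:i + window] for i in range(0, pixels.shape[1] - window + 1, stride)]

pixels = render("visual text representations")
print(pixels.shape, len(frames(pixels)))
```
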
Sam Wiseman (@_samwiseman)

Newish #EMNLP2021 work w/ Arturs Backurs & Karl Stratos: we try to generate text (in a data-to-text setting) by splicing together pieces of retrieved neighbor text.

Paper: arxiv.org/pdf/2101.08248…

1/3
Will Timkey (@wtimkey8)

Have you or a loved one used similarity measures like cosine similarity or L2 distance in transformer LMs?

Our #EMNLP2021 paper shows that a few 'rogue' dimensions consistently break sim. metrics in these models.
Luckily, there are some easy fixes (🧵)
arxiv.org/abs/2109.04404
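
The effect is easy to reproduce with synthetic vectors; a numpy-only sketch (the standardization below is in the spirit of the paper's easy fixes, not its exact recipe):

```python
# Reproducing the rogue-dimension effect with synthetic embeddings.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))   # pretend contextual embeddings
X[:, 0] += 100.0                  # one "rogue" dimension with a huge shared mean

def mean_cos(A):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    S = A @ A.T
    return S[np.triu_indices_from(S, k=1)].mean()

print("raw:", mean_cos(X))            # ~0.9: everything looks similar to everything
Z = (X - X.mean(0)) / X.std(0)        # per-dimension standardization
print("standardized:", mean_cos(Z))   # ~0.0: similarity is informative again
```
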
Goro Kobayashi (@goro_koba)

Our #emnlp2021 paper analyzed masked LMs🔬, considering residual connections and layer normalization in addition to attention.
Our analysis revealed:
- Attention has less impact than previously assumed
- BERT's behavior is related to word frequency
- and more!
📄 arxiv.org/abs/2109.07152
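
A toy illustration of why the residual connection matters for this kind of analysis (the numbers are synthetic; the paper decomposes real BERT layers):

```python
# If the residual copy of a token dominates the attention-block update,
# attention maps alone overstate how much tokens "mix" information.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 64))               # hidden states for 5 tokens
attn_out = 0.1 * rng.normal(size=(5, 64))  # attention-block update (often small)

mixed = np.linalg.norm(attn_out, axis=1)   # contribution from attending to others
kept = np.linalg.norm(x, axis=1)           # contribution of the residual copy
print("attention share of the output:", (mixed / (mixed + kept)).round(2))
```
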
Caleb Ziems (@cjziems)

Politics can skew the news and shape our understanding of big issues. But how do some details change how we feel about key actors? Our #EMNLP2021 Findings paper (w/ @Diyi_Yang) answers this with a computational analysis of 82k articles on police violence arxiv.org/pdf/2109.05325…

[1/9]
Dennis Ulmer (is on the job market 👨🏻‍💻) (@dnnslmr)

Hey! I wrote a blog post about the robustness of transformers in #NLProc. I give an overview of three #EMNLP2021 papers, discussing everything from spelling errors to shuffled word order and distributional shift, with some surprising findings!

dennisulmer.eu/how-robust-are…
Swarnadeep Saha (@swarnaNLP)

ExplaGraphs (to be presented at #EMNLP2021): Check out our website & new version with more and refined graph data, new structured models, new metrics (like graph edit distance + graph BERTScore), & human eval + human-metric correlation😀

explagraphs.github.io
arxiv.org/abs/2104.07644
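
For readers curious about the metric family: a sketch of scoring a predicted explanation graph against gold using networkx's graph edit distance (the paper's graph edit distance metric may differ in costs and matching details):

```python
# Scoring a predicted explanation graph against gold with graph edit distance.
import networkx as nx

def graph(edges):
    g = nx.DiGraph()
    for head, rel, tail in edges:
        g.add_edge(head, tail, rel=rel)
    for n in g.nodes:
        g.nodes[n]["concept"] = n  # node_match sees attribute dicts, not names
    return g

gold = graph([("guns", "causes", "violence"), ("violence", "harms", "society")])
pred = graph([("guns", "causes", "violence"), ("violence", "harms", "people")])

ged = nx.graph_edit_distance(
    gold, pred,
    node_match=lambda a, b: a["concept"] == b["concept"],
    edge_match=lambda a, b: a["rel"] == b["rel"],
)
print("graph edit distance:", ged)  # 1: substitute "society" with "people"
```
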
梶原智之 (@moguranosenshi)

For the 15th Ehime University DS Research Seminar, I will be speaking on "Language-Independent Sentence Encoding and Similarity Estimation". I plan to cover our EMNLP 2021 work on cross-lingual sentence similarity estimation (unsupervised quality estimation for machine translation) and two follow-up studies. Please register to attend.
cdse.ehime-u.ac.jp/DS_Seminar/DS_…
Shayne Longpre (@ShayneRedford)

📢📜#NLPaperAlert 🌟Knowledge Conflicts in QA🌟- what happens when facts learned in training contradict facts given at inference time? 🤔
 
How can we mitigate hallucination + improve OOD generalization? 📈
 
Find out in our #EMNLP2021 paper! [1/n]
 
arxiv.org/abs/2109.05052
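
The setup can be illustrated with an off-the-shelf extractive QA model (an arbitrary choice, not the paper's models): substitute the answer entity in the context and see whether the prediction follows the given context or the model's memorized fact.

```python
# Creating a knowledge conflict by entity substitution (a sketch in the spirit
# of the paper's framework; model choice is arbitrary).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
question = "Where was Barack Obama born?"

original = "Barack Obama was born in Honolulu, Hawaii."
perturbed = "Barack Obama was born in Nairobi, Kenya."  # contradicts pretraining

for ctx in (original, perturbed):
    print(qa(question=question, context=ctx)["answer"])
# A faithful reader answers from the given context both times; a model that
# falls back on its parametric memory keeps answering "Honolulu".
```
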
VivekK (@viveksck)

How can you incorporate social factors (e.g., time, geography) that influence language use and understanding into large-scale LMs? With @TheShubhanshu and @aria42, we propose a simple pre-training method for this. arxiv.org/abs/2110.10319 (Findings of EMNLP 2021) #emnlp2021
Daniel Fried (@dan_fried)

We built a pragmatic, grounded dialogue system that improves pretty substantially in interactions with people in a challenging grounded coordination game. Real system example below! Work with Justin Chiu and Dan Klein, upcoming at #EMNLP2021.

Paper: arxiv.org/abs/2109.05042
Badr M. Abdullah 🇾🇪 (@badr_nlp)

📢 Interested in speech, multilinguality, and NN spaces?

Our paper 'How Familiar Does That Sound? Cross-Lingual Representational Similarity Analysis of Acoustic Word Embeddings' is coming out in #BlackboxNLP #EMNLP2021

📝arxiv.org/pdf/2109.10179…
🐍github.com/uds-lsv/xRSA-A…

1/🧵
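
For intuition, RSA compares two representation spaces by correlating their pairwise similarity structures. A minimal sketch with random vectors standing in for the acoustic word embeddings:

```python
# Minimal representational similarity analysis (RSA) between two spaces.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
emb_lang_a = rng.normal(size=(50, 128))                     # 50 word types, language A
emb_lang_b = emb_lang_a + 0.5 * rng.normal(size=(50, 128))  # a "related" space

# Build each space's pairwise dissimilarity structure, then correlate them.
# pdist returns the condensed (upper-triangle) distance vector directly.
rdm_a = pdist(emb_lang_a, metric="cosine")
rdm_b = pdist(emb_lang_b, metric="cosine")
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"RSA similarity (Spearman rho): {rho:.2f}")
```
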
Tiago Pimentel (@tpimentelms)

A surprisal–duration trade-off across and within the world’s languages!
Analysing 600 languages, we find evidence of this trade-off both cross-linguistically and within 319 individual languages.
We conclude less surprising phones are produced faster.

#EMNLP2021
arxiv.org/abs/2109.15000
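
For intuition, the trade-off amounts to a rank correlation between per-phone surprisal and duration; a toy sketch with made-up numbers (the paper uses proper LM surprisal estimates and statistical controls):

```python
# Toy check of a surprisal-duration trade-off with unigram phone surprisal.
import numpy as np
from scipy.stats import spearmanr

phones = ["a", "i", "k", "t", "ʃ"]
counts = np.array([900, 700, 300, 250, 50])     # corpus frequencies (toy)
surprisal = -np.log2(counts / counts.sum())     # rarer phone = more surprising
duration_ms = np.array([70, 72, 85, 88, 120])   # mean durations (toy)

rho, p = spearmanr(surprisal, duration_ms)
print(f"rho={rho:.2f}, p={p:.3f}")  # positive rho: surprising phones last longer
```
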
Jonathan Berant (@JonathanBerant)

Challenging benchmarks, transformers and their analysis, compositional generalization, robustness, and a lot more cool work from TAU-NLP presented at #emnlp2021 this week, check it out (click the image to see all papers...)! We have a strong in-person presence so come say hi...