zhiyang xu (@zhiyangx11)

We introduce 🌟MultiInstruct🌟, the first multimodal instruction-tuning dataset, in our 🚀#ACL2023NLP🚀 paper. MultiInstruct consists of 62 diverse multimodal tasks, each equipped with 5 expert-written instructions.
🚩arxiv.org/abs/2212.10773🧵[1/3]

Moritz Plenz (@MoritzPlenz)

Happy to share my first paper “Similarity-weighted Construction of Contextualized Commonsense Knowledge Graphs for Knowledge-intense Argumentation Tasks”, accepted at #ACL2023NLP 🥳

📜 arxiv.org/abs/2305.08495
🎥 youtube.com/watch?v=aA5kPg…

1/n

Afra Amini (@afra_amini)

Are you a big fan of structure?

Have you ever wanted to apply the latest and greatest large language model out-of-the-box to parsing?

Are you a secret connoisseur of linear-time dynamic programs?

If you answered yes, our outstanding #ACL2023NLP paper may be just right for you!

conan1024hao (@810396815)

My first-author paper has been accepted to Findings of ACL 2023!
“Kanbun-LM: Reading and Translating Classical Chinese in Japanese Methods by Language Models”
The work is on automatic kaeriten (reading-order marker) annotation and kakikudashi (Japanese rendering) generation for kanbun (Classical Chinese). It's a niche field, but I hope it can contribute to the development of kanbun education!

#ACL2023NLP

Fanny Jourdan (@Fannyjrd_)

I'm glad to share that our paper 'COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP' (arxiv.org/abs/2305.06754) was accepted at Findings of #ACL2023! ❤️🦜

#ACL2023NLP #NLProc #XAI 1/6🧵

Zeming Chen (@eric_zemingchen)

📢New paper to appear at #acl2023nlp: arxiv.org/abs/2212.10534

Human-quality counterfactual data with no humans! Introducing DISCO, our novel distillation framework that automatically generates high-quality, diverse, and useful counterfactual data at scale using LLMs.
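
For context on how such a pipeline can work, here is a minimal, hypothetical sketch of the general overgenerate-then-filter recipe for LLM-based counterfactual generation. It illustrates the idea only, not DISCO's exact pipeline; `llm_complete` and `classify` are placeholder hooks, not real APIs.

```python
# Hypothetical sketch of "overgenerate with an LLM, then filter with a task
# model" for counterfactual data generation. An illustration of the general
# recipe, not DISCO's exact pipeline; both hooks below are placeholders.

def llm_complete(prompt: str, n: int) -> list[str]:
    raise NotImplementedError("plug in an LLM API here")

def classify(text: str) -> str:
    raise NotImplementedError("plug in a trained task model here")

def counterfactuals(text: str, label: str, target: str, n: int = 8) -> list[str]:
    prompt = (
        f"Minimally edit the sentence so its label changes from "
        f"'{label}' to '{target}'.\nSentence: {text}\nEdited sentence:"
    )
    candidates = llm_complete(prompt, n=n)  # overgenerate candidate edits
    # keep only edits the task model agrees actually flip the label
    return [c for c in candidates if classify(c) == target]
```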

Rosa Zhou (@qiaoyu_rosa)

🔥Paper Alert!! #NLProc

How can we effectively learn from natural language explanations while leveraging LLMs?

Read our #ACL2023NLP paper: 🔥FLamE: Few-shot Learning from Natural Language Explanations

📄: arxiv.org/abs/2306.08042
📽️: youtu.be/rnSIFCeDq_Y

Details in 🧵(1/n)

Yuval Reif (@YuvalReif)

Is dataset debiasing the right path to robust models?

In our work, “Fighting Bias with Bias”, we argue that in order to promote model robustness, we should in fact amplify biases in training sets.

w/ @royschwartzNLP
In #ACL2023NLP Findings
Paper: arxiv.org/abs/2305.18917
🧵👇

Genta Winata (@gentaiscool)

Does an LLM forget when it learns a new language?

We systematically study catastrophic forgetting in a massively multilingual continual learning framework in 51 languages.

Preprint: arxiv.org/abs/2305.16252
⬇️🧵
The paper was accepted to #acl2023nlp Findings. #NLProc [1/4]

Sean MacAvaney (@macavaney)

If you're at #acl2023 #acl2023nlp, drop by our poster today at 11!

Effective Contrastive Weighting for Dense Query Expansion

📄 aclanthology.org/2023.acl-long.…

We explore strategies for learning the best vectors to add to your query, improving retrieval for ColBERT-style models.

Mehran Kazemi (@kazemi_sm)

#ACL2023NLP Paper Alert
Large Language Models (#LLMs) still struggle with multi-hop deductive reasoning. We propose LAMBADA, an approach that achieves a massive performance boost by combining LLMs with the classical backward chaining algorithm.
arxiv.org/pdf/2212.13894…
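
For readers unfamiliar with backward chaining, below is a minimal sketch of the classical algorithm the tweet references, over toy string-based rules. LAMBADA replaces the exact matching here with LLM-driven goal decomposition and fact checking; the function and the toy facts are illustrative only.

```python
# Minimal sketch of classical backward chaining. Rules are
# (premises, conclusion) pairs over plain strings; in LAMBADA an LLM handles
# the rule-selection and fact-checking steps rather than exact matching.

def backward_chain(goal, facts, rules, depth=8):
    """Try to prove `goal` from `facts`, reasoning backward from the goal."""
    if depth == 0:        # guard against cyclic rule sets
        return False
    if goal in facts:     # base case: the goal is a known fact
        return True
    for premises, conclusion in rules:
        if conclusion == goal:  # a rule that could derive the goal
            # the goal holds if every premise can itself be proven
            if all(backward_chain(p, facts, rules, depth - 1) for p in premises):
                return True
    return False

facts = {"nadim is a rabbit"}
rules = [(["nadim is a rabbit"], "nadim is furry")]
print(backward_chain("nadim is furry", facts, rules))  # -> True
```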

Yuka Ko (@keiouok)

I received the Best Student Paper Award at IWSLT 2023.
I had always assumed awards would never come my way, and I was listening to the closing session bleary-eyed before dawn, so it still hasn't sunk in.
I see this as an award won together with my co-authors and everyone in our lab. Thank you all so much!
#IWSLT #ACL2023NLP

Brihi Joshi (@BrihiJ)

Super excited to share our #ACL2023NLP paper! 🙌🏽

📢 Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

📑: arxiv.org/abs/2305.07095

🧵👇 [1/n]

#NLProc #XAI

John Wieting (@johnwieting2)

Want to try a new multilingual sentence retriever, free of the quirks and demands of contrastive learning? Read about VMSST, our new variational source-separation method. Improves quality and trains with small batches!

To appear at #ACL2023NLP.

arXiv: arxiv.org/abs/2212.10726

Sina Ahmadi (@sina_ahm)

I received an email earlier today saying that my visa application to attend #ACL2023NLP in Canada is approved. 

The conference was held five months ago in July 2023! 😑
#NLProc #AcademicTwitter

Walid Magdy 🇵🇸 (@Walid_Magdy)

Thanks @aclmeeting for hosting the conference in Canada while ignoring all nationalities who won't be able to participate due to the strict visa procedure!

Given this, kindly stop claiming too much about the importance of diversity, equity & inclusion!

#ACL2023 
#ACL2023NLP

Prasann Singhal (@prasann_singhal)

New #ACL2023NLP paper!

Reranking generation sets with transformer-based metrics can be slow. What if we could rerank everything at once? We propose EEL: Efficient Encoding of Lattices for fast reranking!

Paper: arxiv.org/abs/2306.00947 w/ @JiachengNLP @xiye_nlp @gregd_nlp

Ben Tang (@bennyjtang)

Chart captioning is hard, both for humans & AI.

Today, we’re introducing VisText: a benchmark dataset of 12k+ visually-diverse charts w/ rich captions for automatic captioning (w/ @angie_boggust @arvindsatya1)

📄: vis.csail.mit.edu/pubs/vistext.p…
💻: github.com/mitvis/vistext

#ACL2023NLP