UCSB NLP Group (@ucsbNLP)'s Twitter Profile
UCSB NLP Group

@ucsbNLP

The NLP Group @ University of California, Santa Barbara. Profs. @WilliamWangNLP, Xifeng Yan, Simon Todd, @CodeTerminator, @lileics; acct run by @m2saxon

ID: 1417329311506264066

Link: http://nlp.cs.ucsb.edu/ · Joined: 20-07-2021 03:44:56

179 Tweets

1.4K Followers

735 Following

Antonis Antoniades (@anton_iades)

Tomorrow (10:45am GMT +2) I am presenting Neuroformer at ICLR (#68). Stop by to hear how we trained a multimodal GPT on data from a mouse playing a VR game! 🧠🤖

Alon Albalak (@AlbalakAlon)

With all of the excitement of the past few months, it's time for a career update: 🎉I graduated with my PhD from the UCSB NLP Group at UC Santa Barbara, 🥳joined SynthLabs, 🎊to drive open-science collaborations and push the boundaries of data strategies for synthetic data

👇I'm at !

William Wang (@WilliamWangNLP)

Just had an amazing visit to the legendary Alice Oh (@aliceoh) at KAIST! Blown away by the brilliant students and world-class faculty. It truly is the MIT of Korea! 🌟🎓 #KAIST

Michael Saxon (@m2saxon)

🚨We've been assessing T2I metrics wrong...until now‼️

Our new meta-metric for T2I faithfulness metrics, T2IScoreScore (TS2), checks if a metric correctly orders and separates many images against single prompts!

The results may surprise you...

t2iscorescore.github.io

1/5

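For intuition, here is a minimal sketch of the kind of ordering check TS2 performs. The names and the Spearman-based scoring below are illustrative assumptions, not the paper's exact formulation: given images with a known, increasing number of errors against one prompt, a good faithfulness metric should score them in decreasing order.

```python
# Illustrative sketch of an ordering check in the spirit of TS2.
# `metric` is any text-image faithfulness scorer; the Spearman test
# is an assumed stand-in for the paper's exact meta-metric.
from scipy.stats import spearmanr

def ordering_score(metric, prompt, images, error_counts):
    """Return the Spearman correlation between known error counts and
    negated metric scores; 1.0 means the metric orders the images
    perfectly (more errors -> lower score)."""
    scores = [metric(prompt, img) for img in images]
    rho, _ = spearmanr(error_counts, [-s for s in scores])
    return rho
```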
William Wang (@WilliamWangNLP)

Happy to announce my upcoming South Korea 🇰🇷 tour next week 🤩

KAIST - Monday 4/15, 2:30pm, E3-1, Rm 4443. Host: Alice Oh (@aliceoh)
SKKU - Suwon, Tuesday 4/16, TBD, Engineering Hall 2. Host: JinYeong Bak (@NoSyu)
SNU - Wednesday 4/17, 1pm, see below. Host: Jay-Yoon Lee

I hope to meet new + old friends!

Wenda Xu (@WendaXu2)

When LLMs make mistakes, can we build a model to pinpoint errors and indicate their severity and type? Can we incorporate this fine-grained info to improve LLMs? We introduce LLMRefine [NAACL 2024], a simulated annealing method to revise LLM output at inference. @GoogleAI @ucsbNLP

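As a rough illustration of inference-time revision by simulated annealing (the helper names, temperature schedule, and acceptance rule below are assumptions, not LLMRefine's exact algorithm):

```python
# Illustrative simulated-annealing revision loop. `propose_fix` would
# apply one targeted edit based on fine-grained error feedback, and
# `quality` would score text with the fine-grained error model.
import math
import random

def refine(output, propose_fix, quality, steps=20, t0=1.0, decay=0.9):
    cur, cur_q, t = output, quality(output), t0
    for _ in range(steps):
        cand = propose_fix(cur)
        dq = quality(cand) - cur_q
        # Accept improvements outright; accept worse revisions with
        # probability exp(dq / t), so early (hot) steps can escape
        # local optima while later (cool) steps mostly refine.
        if dq > 0 or random.random() < math.exp(dq / t):
            cur, cur_q = cand, cur_q + dq
        t *= decay
    return cur
```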
William Wang (@WilliamWangNLP)

Huge congratulations to Alon Albalak for defending his PhD thesis “Understanding and Improving Models Through a Data-Centric Lens”. It’s refreshing to witness Alon’s growth, innovation, and leadership in the last few years. Alon is my 8th PhD graduate and I wish him all the best!

Alon Albalak (@AlbalakAlon)

🤩 I'm honored that insights from our data selection survey are being shared across the globe 🌍

Fantastic slides, Thomas Wolf!

Niloofar (Fatemeh) @ICLR 🇦🇹 (@niloofar_mire)

I'll be at @uclanlp tomorrow & @ucsbNLP the next day to talk about how the 'emergent' capabilities of LLMs create emergent inference-time privacy risks, and how membership inference attacks can be inconclusive in current setups! Hit me up if you wanna chat!

tinyurl.com/mia-cnfide

Kexun Zhang (@kexun_zhang)

🚀Fire linguists in the LLM era? No! Excited to share LingoLLM, a novel method for processing endangered languages with linguistic resources. LingoLLM improves the translation of many endangered languages from 0 to 10.5 BLEU! It helps other tasks as well!
arxiv.org/abs/2402.18025

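A minimal sketch of what resource-grounded prompting in this spirit could look like; the prompt template and data structures are assumptions, not LingoLLM's actual pipeline:

```python
# Illustrative: pack dictionary glosses and grammar notes into a
# translation prompt for an LLM (hypothetical template).
def build_prompt(sentence, dictionary, grammar_notes):
    glosses = "\n".join(
        f"{w}: {dictionary[w]}" for w in sentence.split() if w in dictionary
    )
    return (
        "Translate the following sentence into English.\n"
        f"Grammar notes:\n{grammar_notes}\n"
        f"Word-by-word glosses:\n{glosses}\n"
        f"Sentence: {sentence}\n"
        "Translation:"
    )
```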
Alon Albalak (@AlbalakAlon)

{UCSB|AI2|UW|Stanford|MIT|UofT|Vector|Contextual AI} present a survey on🔎Data Selection for LLMs🔍

Training data is a closely guarded secret in industry 🤫 With this work, we narrow the knowledge gap, advocating for open, responsible, collaborative progress.
arxiv.org/abs/2402.16827

Wenda Xu (@WendaXu2)

[New paper!] Can LLMs truly evaluate their own output? Can self-refine/self-reward improve LLMs? Our study reveals that LLMs exhibit biases toward their own output. This self-bias gets amplified during self-refine/self-reward, negatively impacting performance. @ucsbNLP

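One simple way to quantify such a bias (an illustrative definition, not necessarily the paper's metric) is the average gap between a model's self-assigned score and an external judge's score on the same outputs:

```python
# Illustrative self-bias estimate: positive values mean the model
# systematically overrates its own generations relative to an
# external judge (human ratings or an independent metric).
def self_bias(outputs, self_score, external_score):
    gaps = [self_score(o) - external_score(o) for o in outputs]
    return sum(gaps) / len(gaps)
```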
UCSB NLP Group (@ucsbNLP)

Thank you for joining us yesterday, Yanai Elazar (@yanaiela)! We are very excited by the 'What's In My Big Data' direction! wimbd.apps.allenai.org

Xinyi Wang (@XinyiWang98)

Happy to share our new preprint on understanding how reasoning emerges from language model pre-training: arxiv.org/abs/2402.03268
We hypothesize that language models can aggregate reasoning paths seen in pre-training data to draw new conclusions at inference time.

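A toy illustration of the path-aggregation hypothesis, assuming a hypothetical graph of premise edges (this is not the paper's formal setup): if pre-training text contains the relations A→B, B→C, A→D, and D→C, aggregating the paths between A and C supports concluding a new A→C relation at inference time.

```python
# Illustrative: enumerate the "reasoning paths" connecting two
# entities in a toy graph of premises seen during pre-training.
from collections import defaultdict

EDGES = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")}

def reasoning_paths(src, dst, max_hops=3):
    out = defaultdict(set)
    for a, b in EDGES:
        out[a].add(b)
    stack, paths = [(src, [src])], []
    while stack:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
        elif len(path) <= max_hops:  # bound the search depth
            stack.extend((n, path + [n]) for n in out[node])
    return paths

# Two independent paths support the new conclusion A -> C.
print(reasoning_paths("A", "C"))  # e.g. [['A','B','C'], ['A','D','C']]
```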
UCSB NLP Group (@ucsbNLP)

Congrats to our very own Matthew Ho for winning a Computing Research Association Outstanding Undergraduate Research Honorable Mention!
cra.org/crn/2024/01/ou…

Antonis Antoniades (@anton_iades)

💻 LMs leveraging generative pretraining learn many diverse skills.

🧠 But what can GPTs trained on brain data learn to do?

Introducing Neuroformer. A generative model pretrained on massively multimodal, multitask neuronal data! (ICLR 2024) 🧵
