Chulin Xie (@ChulinXie)'s Twitter Profile
Chulin Xie

@ChulinXie

CS PhD student at UIUC and student researcher @GoogleAI; ex-research intern @MSFTResearch @NvidiaAI

ID: 1109845260874579969

Link: https://alphapav.github.io/ · Joined: 24-03-2019 15:51:44

51 Tweets

635 Followers

661 Following

Zinan Lin (@lin_zinan)'s Twitter Profile Photo

We introduce 𝗣𝗿𝗶𝘃𝗮𝘁𝗲 𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻, a new framework to generate 𝗗𝗣 𝘀𝘆𝗻𝘁𝗵𝗲𝘁𝗶𝗰 𝗱𝗮𝘁𝗮
✅No training needed! Only inference APIs of models
✅Can even match or beat SoTA training-based methods in data quality
✅Works across images, text, etc.

[1/n]
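
For readers curious how an API-only loop can produce DP synthetic data, here is a minimal sketch of the Private Evolution idea as described in the thread: candidates come from a model's inference API, private data votes for its nearest candidates through a noised (DP) histogram, and well-supported candidates are varied for the next round. The helpers `api_generate`, `api_vary`, and `embed_fn` are hypothetical placeholders, not the paper's actual interface.

```python
import numpy as np

def private_evolution(private_embs, api_generate, api_vary, embed_fn,
                      n_synth=100, iters=5, noise_scale=1.0, seed=0):
    # private_embs: (n_priv, d) embeddings of the sensitive dataset.
    # api_generate(n) / api_vary(samples): hypothetical wrappers around a
    # foundation-model inference API. embed_fn maps samples -> (n, d) array.
    rng = np.random.default_rng(seed)
    synth = api_generate(n_synth)  # initial population; no training involved
    for _ in range(iters):
        synth_embs = embed_fn(synth)
        # Each private point votes for its nearest synthetic sample.
        dists = np.linalg.norm(private_embs[:, None] - synth_embs[None], axis=-1)
        votes = np.bincount(dists.argmin(axis=1), minlength=len(synth)).astype(float)
        # Gaussian noise on the vote histogram is what provides the DP guarantee.
        votes += rng.normal(scale=noise_scale, size=votes.shape)
        probs = np.clip(votes, 0.0, None)
        total = probs.sum()
        probs = probs / total if total > 0 else np.full(len(synth), 1.0 / len(synth))
        # Resample promising candidates and ask the API for variations of them.
        parents = [synth[i] for i in rng.choice(len(synth), size=n_synth, p=probs)]
        synth = api_vary(parents)
    return synth
```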

Chulin Xie (@ChulinXie)'s Twitter Profile Photo

Excited to see the release of the book 🥳 and grateful for the opportunity to contribute a chapter. Big thanks to the three editors: Pin-Yu Chen, Lam M. Nguyen, and Nghia Hoang!

Boxin Wang (@wbx_life)'s Twitter Profile Photo

🔥 Excited to release Retro and InstructRetro code and checkpoints, featuring:
- the largest LLM pretrained with retrieval and instruction tuning
- retrieval from trillions of tokens
- end2end reproducible recipe

Code: github.com/NVIDIA/Megatro…
Checkpoints: huggingface.co/collections/nv…
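
As background on what "pretrained with retrieval" means here: Retro-style models split the input into chunks and, for each chunk, retrieve nearest neighbors from a huge pre-embedded corpus to condition generation on. A toy cosine-similarity sketch of that retrieval step (all names hypothetical; this is a generic illustration, not the Megatron-LM implementation, which uses a scalable ANN index over trillions of tokens rather than a dense scan):

```python
import numpy as np

def retrieve_neighbors(chunk_emb, corpus_embs, corpus_chunks, k=2):
    # chunk_emb: (d,) embedding of one input chunk.
    # corpus_embs: (N, d) pre-computed embeddings of corpus chunks.
    # Returns the k corpus chunks most similar to the input chunk,
    # which the LM would attend to as extra conditioning context.
    sims = corpus_embs @ chunk_emb / (
        np.linalg.norm(corpus_embs, axis=1) * np.linalg.norm(chunk_emb) + 1e-8)
    top = np.argsort(-sims)[:k]
    return [corpus_chunks[i] for i in top]
```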

Weixin Chen (@chenweixin107)'s Twitter Profile Photo

Can we utilize OOD queries to improve LLMs' truthfulness without relying on any human-annotated answers?
Yes! Check out 'Gradual Self-Truthifying for Large Language Models'! arxiv.org/abs/2401.12292
- Adaptively optimizes the model via DPO on self-generated pairwise truthfulness data.
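
For context on the training signal mentioned above, this is the standard DPO objective applied to (truthful, untruthful) answer pairs; a minimal PyTorch sketch assuming summed per-answer log-probs have already been computed, not the paper's actual training code:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Each argument: tensor of summed log-probs of an answer under the
    # trainable policy or the frozen reference model. In this setting the
    # chosen/rejected pairs would be the model's own self-generated
    # truthful vs. untruthful answers.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push the policy to prefer the truthful answer relative to the reference.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```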

Secure Learning Lab (SLL) (@uiuc_aisecure)'s Twitter Profile Photo

Super excited to set up the LLM safety & trustworthiness leaderboard on Hugging Face, and we will keep adding new safety perspectives. Here we evaluate (open & closed) LLMs as well as compressed LLMs. Looking forward to more exciting evaluations to assess and enhance LLM safety!!! 🥳

Yangsibo Huang (@YangsiboHuang)'s Twitter Profile Photo

I am at now.

I am also on the academic job market, and humbled to be selected as a 2023 EECS Rising Star✨. I work on ML security, privacy & data transparency.

Appreciate any reposts & happy to chat in person! CV+statements: tinyurl.com/yangsibo

Find me at ⬇️

Zhichun Guo (@Zhichun5)'s Twitter Profile Photo

I will attend from Dec. 10th to 17th and can't wait to meet old and new friends there! 🌱🎓 I am now on the academic job market for faculty positions. My research focuses on graph learning and AI for science (zguo.io). Feel free to reach out 🌟

Rylan Schaeffer (@RylanSchaeffer)'s Twitter Profile Photo

Excited to announce

🔥🤨DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models 🤨 🔥

Appearing at #NeurIPS2023 as Datasets and Benchmarks **Oral**

Paper: openreview.net/forum?id=kaHpo…

Led by Secure Learning Lab (SLL): Boxin Wang, Chulin Xie, Chenhui Zhang

1/N

Yangsibo Huang (@YangsiboHuang)'s Twitter Profile Photo

Microsoft's recent work (arxiv.org/abs/2310.02238) shows how LLMs can unlearn copyrighted training data via strategic finetuning: They made Llama2 unlearn Harry Potter's magical world.

But our Min-K% Prob (tinyurl.com/mink-prob) found some persistent “magical traces”!🔮

[1/n]
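
The Min-K% Prob detector referenced here has a simple core: score a text by the average log-probability of its k% least-likely tokens under the model; unusually high scores suggest the text was seen in training. A rough sketch assuming a Hugging Face causal LM (model/tokenizer choice and the decision threshold are up to the user, and this is an illustration rather than the authors' released code):

```python
import torch

@torch.no_grad()
def min_k_prob_score(model, tokenizer, text, k=0.2, device="cpu"):
    # Tokenize and get per-token log-probs of each next token.
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_logprobs = logprobs.gather(1, ids[0, 1:, None]).squeeze(1)
    # Average the k% lowest-probability tokens; higher scores hint
    # that the text may have been in the training data.
    n = max(1, int(len(token_logprobs) * k))
    return token_logprobs.topk(n, largest=False).values.mean().item()
```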

Microsoft Research (@MSFTResearch)'s Twitter Profile Photo

New research found previously undisclosed trust-related strengths and vulnerabilities in LLMs. Researchers shared these learnings with Microsoft product groups, confirming the potential threats identified do not impact current customer-facing services: msft.it/60169tD0E
