Jacob Steinhardt (@JacobSteinhardt)'s Twitter Profile
Jacob Steinhardt

@JacobSteinhardt

Assistant Professor of Statistics, UC Berkeley

ID: 438570403

Joined: 16-12-2011 19:04:34

323 Tweets

7.1K Followers

67 Following

David Bau (@davidbau)'s Twitter Profile Photo

I am delighted to officially announce the National Deep Inference Fabric project, #NDIF.

ndif.us

NDIF is a U.S. National Science Foundation-supported computational infrastructure project to help YOU advance the science of large-scale AI.

Pravesh K. Kothari (@praveshkkothari)'s Twitter Profile Photo

In a new preprint with Jarek Blasiok, Rares Buhai, and David Steurer, we show a surprisingly simple greedy algorithm that can list-decode planted cliques in the semirandom model at k ~ sqrt(n) log^2(n), essentially optimal up to the log^2(n) factor. This ~resolves Jacob Steinhardt's open question.
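The tweet doesn't spell out the algorithm, but the basic greedy idea behind clique-growing can be sketched in a few lines. The code below is a toy heuristic on a tiny hand-made graph, not the paper's semirandom list-decoding algorithm; `grow_clique` and `greedy_clique_list` are illustrative names.

```python
# Toy sketch of greedy clique growing: from each seed vertex, add any vertex
# adjacent to every current member. Running this from all seeds gives a crude
# "list" of candidate cliques. NOT the preprint's algorithm, just the greedy idea.

def grow_clique(adj, seed):
    """Greedily extend a clique starting from `seed`.

    adj: dict mapping vertex -> set of its neighbors.
    """
    clique = {seed}
    for v in adj:
        # Add v only if it is adjacent to every vertex already in the clique.
        if v not in clique and clique <= adj[v]:
            clique.add(v)
    return clique

def greedy_clique_list(adj):
    """Candidate cliques grown from every seed vertex."""
    return [grow_clique(adj, s) for s in adj]

# Example: a 4-vertex graph whose largest clique is {0, 1, 2}.
adj = {
    0: {1, 2},
    1: {0, 2, 3},
    2: {0, 1},
    3: {1},
}
best = max(greedy_clique_list(adj), key=len)
```

In the planted-clique setting the interesting question is when such simple procedures recover a hidden clique of size k; the preprint's contribution is the semirandom analysis, not the greedy loop itself.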

Yuhui Zhang (@Zhang_Yu_hui)'s Twitter Profile Photo

Super excited to share that VisDiff has been accepted and selected as an oral (90/11,532)! We will give a 15-min presentation going through the methods and exciting applications enabled by VisDiff. See you in Seattle!

Danny Halawi (@dannyhalawi15)'s Twitter Profile Photo

Language models can imitate patterns in prompts. But this can lead them to reproduce inaccurate information if present in the context.

Our work (arxiv.org/abs/2307.09476) shows that when given incorrect demonstrations for classification tasks, models first compute the correct
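The experimental setup described here (incorrect demonstrations in a classification prompt) can be sketched as a prompt-construction helper. The task, examples, and label names below are made up for illustration; the paper studies what the model computes internally when fed such flipped-label prompts.

```python
# Illustrative sketch of the flipped-label in-context setup: build a few-shot
# sentiment-classification prompt where each demonstration shows the WRONG label.
# Task, examples, and label names are hypothetical.

FLIP = {"positive": "negative", "negative": "positive"}

def make_prompt(demos, query, flip_labels=False):
    """Assemble an in-context classification prompt.

    demos: list of (text, correct_label) pairs.
    flip_labels: if True, each demonstration displays the incorrect label.
    """
    lines = []
    for text, label in demos:
        shown = FLIP[label] if flip_labels else label
        lines.append(f"Review: {text}\nSentiment: {shown}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("I loved this movie.", "positive"),
    ("Terrible, a waste of time.", "negative"),
]
prompt = make_prompt(demos, "An instant classic.", flip_labels=True)
```

A model that imitates the pattern in `prompt` would answer "negative" for the clearly positive query, which is the failure mode the thread describes.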

Frances Ding (@FrancesDing)'s Twitter Profile Photo

Protein language models (pLMs) can give protein sequences likelihood scores, which are commonly used as a proxy for fitness in protein engineering. But what do likelihoods encode?

In a new paper (w/ Jacob Steinhardt) we find that pLM likelihoods have a strong species bias!

1/
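The way such likelihood scores are typically computed can be sketched as a sum of per-residue log-probabilities. The numbers below are invented; a real pLM would produce the per-residue probabilities from the sequence itself.

```python
import math

# Sketch of a pLM-style sequence log-likelihood: sum the log-probability the
# model assigns to each residue. The probabilities here are made up; a real
# protein language model would compute them from the sequence.

def sequence_log_likelihood(per_residue_probs):
    """Sum of log-probabilities over residues (higher = more model-typical)."""
    return sum(math.log(p) for p in per_residue_probs)

# Two hypothetical 4-residue proteins scored by an imaginary model.
probs_a = [0.30, 0.25, 0.40, 0.20]   # the model finds this sequence typical
probs_b = [0.05, 0.10, 0.02, 0.08]   # the model finds this one surprising
score_a = sequence_log_likelihood(probs_a)
score_b = sequence_log_likelihood(probs_b)
```

The thread's point is that a higher score like `score_a` need not mean higher fitness: it can instead reflect which species' proteins the sequence resembles.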

Shayne Longpre (@ShayneRedford)'s Twitter Profile Photo

Independent AI research should be valued and protected.

In an open letter signed by over 100 researchers, journalists, and advocates, we explain how AI companies should support it going forward.

sites.mit.edu/ai-safe-harbor/

1/

Fred Zhang (@FredZhang0)'s Twitter Profile Photo

Beating prediction markets with chatbots sounds cool. In recent work (arxiv.org/abs/2402.18563), we get somewhat close to that.

As another perspective, forecasting is a great capability domain to benchmark LM reasoning, calibration, pre-training knowledge, and more. 🧵1/n
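Calibration, one of the capabilities the tweet mentions, is commonly measured with the Brier score. The sketch below shows that standard metric on hypothetical forecasts; it is not code from the paper.

```python
# Minimal sketch of Brier-score evaluation, a standard metric for probabilistic
# forecasts. Forecast probabilities and binary outcomes below are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    0.0 is perfect; always answering 0.5 scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

calibrated = brier_score([0.9, 0.2, 0.7], [1, 0, 1])   # tracks outcomes well
uninformed = brier_score([0.5, 0.5, 0.5], [1, 0, 1])   # always says 50/50
```

A forecasting system "somewhat close" to prediction markets would need a Brier score approaching the market's on the same questions, which is how such comparisons are usually run.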

Alex Pan (@aypan_17)'s Twitter Profile Photo

Remember when Bing’s LLM Sydney threatened Marvin von Hagen for tweeting about its prompt?

Our paper shows how such unexpected behavior in LLMs emerges from feedback loops and provides recommendations for evaluation to capture feedback effects.

📰: arxiv.org/abs/2402.06627

1/

Yossi Gandelsman (@YGandelsman)'s Twitter Profile Photo

Accepted as an oral at #ICLR2024!

*Interpreting CLIP's Image Representation via Text-Based Decomposition*

CLIP produces image representations that are useful for various downstream tasks. But what information is actually encoded in these representations?

[1/8]
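The core idea of a text-based decomposition can be illustrated by projecting an image embedding onto directions defined by text embeddings. All vectors below are tiny made-up stand-ins; a real pipeline would obtain them from CLIP's image and text encoders.

```python
import math

# Toy illustration of text-based decomposition: explain an image embedding by
# how strongly it aligns with directions given by text embeddings. The vectors
# are invented; CLIP's actual encoders would supply real ones.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def alignment(image_emb, text_emb):
    """Cosine similarity between the image embedding and a text direction."""
    i, t = normalize(image_emb), normalize(text_emb)
    return sum(a * b for a, b in zip(i, t))

image = [0.9, 0.1, 0.2]                          # hypothetical image embedding
texts = {
    "a photo of a dog": [1.0, 0.0, 0.1],         # hypothetical text embeddings
    "a photo of a car": [0.0, 1.0, 0.0],
}
scores = {desc: alignment(image, emb) for desc, emb in texts.items()}
best = max(scores, key=scores.get)
```

Reading off which text directions carry most of the embedding's mass is what makes such a decomposition interpretable: the scores name, in words, what the representation encodes.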
