Micah Goldblum (@micahgoldblum)'s Twitter Profile
Micah Goldblum

@micahgoldblum

🤖Postdoc at NYU with @ylecun / @andrewgwils. All things machine learning🤖 🚨On the faculty job market this year!🚨

ID: 2932062039

Website: https://goldblum.github.io/ · Joined: 19-12-2014 14:37:49

813 Tweets

5.4K Followers

692 Following

Gowthami Somepalli (@gowthami_s)

✨ Can we detect style in generated images? Our recent work takes a step towards understanding this question. We train a style-focused vision feature extractor built on top of CLIP, which we call a Contrastive Style Descriptor (CSD).

paper: arxiv.org/abs/2404.01292

Style…

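The thread describes the approach only at a high level, but the idea lends itself to a short sketch: embed images with a frozen CLIP image encoder, pass the features through a style projection head (trained with a contrastive objective in the actual CSD work), and score style similarity by cosine similarity in that space. The sketch below is a minimal illustration, not the released CSD code: the checkpoint name, the untrained placeholder head, and the file paths are assumptions.

```python
# Minimal sketch of a CLIP-based style descriptor: frozen CLIP image
# encoder + a projection head, with style similarity measured as cosine
# similarity between embeddings. The projection head here is an untrained
# placeholder; in the actual CSD work it is trained contrastively on
# style-labelled data. Model name and file paths are assumptions.
import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model = model.to(device).eval()

# Placeholder for the style projection head (512-d for ViT-B-32 features).
style_head = torch.nn.Linear(512, 512).to(device)

@torch.no_grad()
def style_embedding(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    features = model.encode_image(image).float()
    return F.normalize(style_head(features), dim=-1)

# Cosine similarity between a generated image and a reference artwork
# serves as the style-similarity score.
gen = style_embedding("generated.png")      # assumed file path
ref = style_embedding("reference_art.png")  # assumed file path
print("style similarity:", (gen @ ref.T).item())
```

With the released CSD weights in place of the placeholder head, the same cosine-similarity score is what would flag a generated image as stylistically close to a training image.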
AK (@_akhaliq)

Measuring Style Similarity in Diffusion Models

Generative models are now widely used by graphic designers and artists. Prior works have shown that these models remember and often replicate content from their training data during generation. Hence as their proliferation

Benjamin Feuer (@FeuerBenjamin)

We're excited to introduce TuneTables, a new deep learning method for tabular data classification. Without hyperparameter optimization, TuneTables is comparable to any single optimized gradient-boosted method on datasets with up to 1.9M samples, 22 classes, and 7,200 features. 1/6

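As a rough illustration of the kind of comparison the thread describes, here is a hedged sketch: a scikit-learn gradient-boosted baseline on a medium-sized tabular dataset, with the TuneTables call left as a commented-out, hypothetical interface, since the thread does not specify how the released code is invoked and its actual API may differ.

```python
# Sketch of a tabular classification comparison: a gradient-boosted
# baseline (real scikit-learn API) versus TuneTables with default settings
# (hypothetical interface, commented out). Dataset choice is an assumption.
from sklearn.datasets import fetch_covtype
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = fetch_covtype(return_X_y=True)  # ~581k samples, 54 features, 7 classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Gradient-boosted baseline (would normally be hyperparameter-tuned).
gbm = HistGradientBoostingClassifier(max_iter=200)
gbm.fit(X_tr, y_tr)
print("GBM accuracy:", accuracy_score(y_te, gbm.predict(X_te)))

# Hypothetical TuneTables call with no hyperparameter optimization;
# the released package's actual interface may differ.
# from tunetables import TuneTablesClassifier
# tt = TuneTablesClassifier()
# tt.fit(X_tr, y_tr)
# print("TuneTables accuracy:", accuracy_score(y_te, tt.predict(X_te)))
```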