Nikolas Adaloglou (black0017) (@nadaloglou)'s Twitter Profile
Nikolas Adaloglou (black0017)

@nadaloglou

Making AI intuitive at https://t.co/1bgNHk9oRB || Human-centered PhD AI researcher || Book: https://t.co/WYKwPuBepT || AI Course: https://t.co/TH3jpaNqJb

ID: 1154385257581436928

Link: https://github.com/black0017 | Joined: 25-07-2019 13:37:47

1.3K Tweets

1.0K Followers

418 Following

Amirmojtaba Sabour (@amsabour)'s Twitter Profile Photo

📢📢 Align Your Steps: Optimizing Sampling Schedules in Diffusion Models

research.nvidia.com/labs/toronto-a…

TL;DR: We introduce a method for obtaining improved sampling schedules for diffusion models, resulting in better samples at the same computation cost.

(1/5)
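For context, a "sampling schedule" is simply the set of noise levels the solver visits at inference time, and hand-crafted choices already differ noticeably. Below is a minimal NumPy sketch (my own illustration, not the paper's optimized schedules; sigma_min, sigma_max, and rho are illustrative defaults) contrasting a uniform sigma spacing with the widely used Karras/EDM schedule:

import numpy as np

def uniform_schedule(num_steps, sigma_min=0.002, sigma_max=80.0):
    # evenly spaced noise levels, highest noise first
    return np.linspace(sigma_max, sigma_min, num_steps)

def karras_schedule(num_steps, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    # EDM (Karras et al.) schedule: spends more of the step budget at low noise levels
    ramp = np.linspace(0.0, 1.0, num_steps)
    inv_rho = 1.0 / rho
    return (sigma_max ** inv_rho + ramp * (sigma_min ** inv_rho - sigma_max ** inv_rho)) ** rho

print(uniform_schedule(10).round(3))
print(karras_schedule(10).round(3))

The thread's method optimizes the schedule itself rather than relying on such hand-tuned heuristics.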

Naval (@naval)'s Twitter Profile Photo

If you want to impact the world, then have good ideas, articulate them well, without ulterior motive, and bring them into reality.

Thomas Wolf (@Thom_Wolf)'s Twitter Profile Photo

Llama3 was trained on 15 trillion tokens of public data. But where can you find such datasets and recipes??

Here comes the first release of 🍷Fineweb. A high-quality, large-scale filtered web dataset outperforming all current datasets of its scale. We trained 200+ ablation…

Phillip Isola (@phillip_isola)'s Twitter Profile Photo

Our computer vision textbook is released!

Foundations of Computer Vision
with Antonio Torralba and Bill Freeman
mitpress.mit.edu/9780262048972/…

It’s been in the works for >10 years. Covers everything from linear filters and camera optics to diffusion models and radiance fields.

1/4

AK (@_akhaliq)'s Twitter Profile Photo

Google announces TransformerFAM

Feedback attention is working memory

While Transformers have revolutionized deep learning, their quadratic attention complexity hinders their ability to process infinitely long inputs. We propose Feedback Attention Memory (FAM), a novel
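To make the block-wise "working memory" idea concrete, here is a toy sketch (my own illustration under assumptions, not the paper's architecture): the sequence is processed in blocks, each block attends only to itself plus a small carried memory, and the memory is updated from the block's outputs, keeping the cost linear in sequence length. The mean-pooling update and all names below are placeholders for the paper's learned feedback mechanism.

import torch
import torch.nn.functional as F

def toy_feedback_attention(x, block_size=64, mem_slots=8):
    # x: (seq_len, d) token embeddings; returns (seq_len, d).
    # Each block attends to itself plus a small carried memory, so the cost per
    # block is O(block_size * (block_size + mem_slots)) instead of O(seq_len^2).
    seq_len, d = x.shape
    memory = torch.zeros(mem_slots, d)                 # carried "working memory"
    outputs = []
    for start in range(0, seq_len, block_size):
        block = x[start:start + block_size]
        kv = torch.cat([memory, block], dim=0)         # keys/values: memory + current block
        attn = F.softmax(block @ kv.T / d ** 0.5, dim=-1)
        out = attn @ kv
        outputs.append(out)
        # crude memory update: mean-pool the block's outputs into the memory slots
        # (a stand-in for the learned feedback attention in the paper)
        memory = out.mean(dim=0, keepdim=True).repeat(mem_slots, 1)
    return torch.cat(outputs, dim=0)

x = torch.randn(1024, 32)
print(toy_feedback_attention(x).shape)   # torch.Size([1024, 32])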

AI Summer (@theaisummer)'s Twitter Profile Photo

[AI Summer Learning Mondays] - Grokking self-supervised (representation) learning: how it works in computer vision and why | AI Summer hubs.ly/Q026lp3T0

Leanpub (@leanpub)'s Twitter Profile Photo

Deep Learning in Production by Sergios Karagiannakos is on sale on Leanpub! Its suggested price is $25.00; get it for $10.50 with this coupon: leanpub.com/sh/4e2jkFOD

Sergios Karagiannakos (@KarSergios)'s Twitter Profile Photo

The goal of diffusion models is to learn a diffusion process that generates the probability distribution of a given dataset. Once the diffusion process is learned, we can generate new samples that follow the distribution.

theaisummer.com/diffusion-mode…
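As a rough sketch of the "learn the diffusion process, then sample" idea (not the article's code; model(x_t, t) below is a placeholder for any noise-prediction network): the forward process corrupts data with Gaussian noise in closed form, and the network is trained to predict that noise, which is exactly what gets reversed step by step at sampling time.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # standard DDPM beta schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative signal retention

def noisy_sample(x0, t):
    # Sample x_t ~ q(x_t | x_0) in closed form.
    eps = torch.randn_like(x0)
    x_t = alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * eps
    return x_t, eps

def training_loss(model, x0):
    # Simplified DDPM objective: the network predicts the added noise.
    t = torch.randint(0, T, (1,))
    x_t, eps = noisy_sample(x0, t)
    return ((model(x_t, t) - eps) ** 2).mean()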

Vikash K Prasad (@VikashS73164257)'s Twitter Profile Photo

The mathematics behind diffusion models. Visualising this would be no less than art.
theaisummer.com/diffusion-mode…
