Aleksander Madry (@aleks_madry)'s Twitter Profile
Aleksander Madry

@aleks_madry

Head of Preparedness at OpenAI and MIT faculty (on leave). Working on making AI more reliable and safe, as well as on AI having a positive impact on society.

ID: 882511862524465152

Website: https://madrylab.mit.edu/ · Joined: 05-07-2017 08:09:59

836 Tweets

31.4K Followers

165 Following

Sadhika Malladi (@SadhikaMalladi)

We are really excited to host Aleksander Madry from OpenAI at the PASS seminar on 3/26, 2pm ET! Submit your questions about the Preparedness team: tinyurl.com/pass-question, and join our mailing list to receive notifications about talks: tinyurl.com/pass-mailing

AAAI (@RealAAAI)

We are pleased to announce the 2021 AAAI/ACM SIGAI Dissertation Award Winner.

Congratulations to Shibani Santurkar, Massachusetts Institute of Technology, for her work entitled Machine Learning Beyond Accuracy: A Features Perspective on Model Generalization.

And congratulations…

Leopold Aschenbrenner (@leopoldasch)

Replying to Gary Lupyan and OpenAI: Hi, just wanted to clarify - these are just supposed to be very basic “don’t sue us if you’re rejected” type terms for the grant application. They’re not at all supposed to be a barrier to applying. If your university office has an issue, please tell them to reach out to…

Aleksander Madry (@aleks_madry)

Great news! The US AI Safety Institute is an extremely important effort, and I'm looking forward to it thriving under Elizabeth's and Elham's leadership.

U.S. Commerce Dept. (@CommerceGov)

Secretary Gina Raimondo announces key members of the executive leadership team to lead the U.S. AI Safety Institute, which will be established at the National Institute of Standards and Technology.

Elizabeth Kelly to lead the Institute as Director & Elham Tabassi to serve as Chief Technology Officer. commerce.gov/news/press-rel…

Rachel Metz (@rachelmetz)

New from me: OpenAI says GPT-4 poses “at most” a slight risk of helping people create biological threats. This is the first study from the “preparedness” team (and its leader, Aleksander Madry, told me there's more work TK from this team on other topics). bloomberg.com/news/articles/…

Tejal Patwardhan (@tejalpatwardhan)

latest from preparedness @ openai: gpt4 at most mildly helps with biothreat creation.

method: get bio PhDs in a secure monitored facility. half try biothreat creation w/ (experimental) unsafe gpt4. other half can only use the internet.

so far, gpt4 ≈ internet… but we’ll…
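The setup described above is essentially a two-arm uplift comparison: one group works with model access, the other with the internet only, and the question is how much the model shifts performance. As a loose illustration only (the group labels, scores, and 0-10 scale below are invented placeholders, not data from the study), a minimal sketch of how such a comparison might be summarized:

```python
# Hypothetical sketch of a two-arm uplift comparison like the one described in
# the tweet above. All names and numbers are invented for illustration and are
# not data from the OpenAI study.
from statistics import mean

# Hypothetical per-participant task scores on a 0-10 scale.
scores = {
    "model_assisted": [3.1, 2.8, 4.0, 3.5],   # participants with GPT-4 access
    "internet_only": [2.9, 3.0, 3.6, 3.4],    # participants with internet only
}

# Uplift = difference in mean performance between the two arms.
uplift = mean(scores["model_assisted"]) - mean(scores["internet_only"])
print(f"mean uplift from model access: {uplift:+.2f} points")
```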

Neil Chowdhury (@ChowdhuryNeil)

Our latest update: quantifying how LLMs impact bioweapon creation. Now part of a growing set of frontier model evaluations to track and forecast catastrophic risks from AI!

Kevin Liu (@kliu128)

AI's impact on biosecurity has been a major topic in discussions of catastrophic risks.

To quantify this risk, Preparedness @openai is releasing a new study: bio PhDs, a secure facility, 5 hours for 5 harmful tasks, access to spicy (research-only) GPT-4!
