MIRI (@MIRIBerkeley)'s Twitter Profile
MIRI

@MIRIBerkeley

MIRI exists to maximize the probability that the creation of smarter-than-human intelligence has a positive impact.

ID: 1568239549

Link: https://intelligence.org · Joined: 04-07-2013 13:32:15

1.2K Tweets

39.2K Followers

99 Following

MIRI (@MIRIBerkeley)'s Twitter Profile Photo

Researcher: jobs.ashbyhq.com/miri/c5a85cd2-…

Writer: jobs.ashbyhq.com/miri/44b7a3a1-…

The roles are located in Berkeley, and we are ideally looking to hire people who can start ASAP.

Please share this with your networks or any people you think might be a good fit!

Linch (@LinchZhang)'s Twitter Profile Photo

When Krishna said “I am become Death, the shatterer of worlds,” I believe he was thinking about the effect on jobs.

Linch (@LinchZhang)'s Twitter Profile Photo

However, if we’ve had “warning shots” where increasingly larger and more dangerous asteroids land in the intervening years, that will allow society to prepare better environmental and social responses.

Linch (@LinchZhang)'s Twitter Profile Photo

We believe in empirical tests and tight feedback loops. Asteroid impact alignment needs to grow alongside asteroid impact capabilities. While we cannot yet consistently target the right continent, we are making steady progress.

Connor Leahy (@NPCollapse)'s Twitter Profile Photo

I'm often asked for the quickest possible summary of why ASI is an extinction risk and what to do about it, and this blogpost (link in replies) is the cleanest, most compact, and most accurate written version of my views that I'm aware of.

Give it a read!

Eliezer Yudkowsky ⏹️ (@ESYudkowsky)'s Twitter Profile Photo

Much of this is false-speaking, as always with Perry, but this in particular:

> They say, instead, that we need to carefully develop “alignment” technologies that must be proven to be absolutely perfect in advance of permitting more development — an idea that defies the…

TIME (@TIME)'s Twitter Profile Photo

Governments and companies hope safety-testing can reduce dangers from AI systems. But the tests are far from ready
time.com/6958868/artifi…

Jesse Mu (@jayelmnop)'s Twitter Profile Photo

We’re hiring for the adversarial robustness team @AnthropicAI!

As an Alignment subteam, we're making a big effort on red-teaming, test-time monitoring, and adversarial training. If you’re interested in these areas, let us know! (emails in 🧵)

Greg ⏹️ Colbourn (@gcolbourn)'s Twitter Profile Photo

Great to see a US Gov-commissioned report saying this.

Not pulling any punches in using the word 'default':
'could behave adversarially to human beings by default'

Hope the US government takes heed of the recommendations!

AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes)'s Twitter Profile Photo

'Godfather of AI' Geoffrey Hinton now thinks there is a 1 in 10 chance everyone will be dead from AI in 5-20 years

Weeks ago, we learned that Yoshua Bengio, another Turing Award winner, thinks there's a 1 in 5 chance we all die.

Hinton is worried about AI hive minds: “Hinton…

PauseAI ⏸ (@PauseAI)'s Twitter Profile Photo

AGI is not inevitable. It requires hordes of engineers with million-dollar paychecks. It requires a fully functional and unrestricted supply chain of the most complex hardware. It requires all of us to allow these companies to gamble with our future.

Luke Muehlhauser (@lukeprog)'s Twitter Profile Photo

Here is your regular reminder that many of the indisputably top experts in AI think that AI poses a credible risk of literal, no-kidding, full-blown human extinction, and that it should be a top global priority to mitigate that risk.
safe.ai/statement-on-a…

MIRI (@MIRIBerkeley)'s Twitter Profile Photo

MIRI is now hiring a managing editor and one or more writers. If you're a strong writer who understands AI x-risk and can construct solid, well-written arguments, we'd really like to hear from you. Apply here: jobs.ashbyhq.com/miri/e07416be-…

Robert Wiblin (@robertwiblin)'s Twitter Profile Photo

Some claim that human brains can really 'think' or 'understand' — but this illusion is undercut by simply asking humans to remember 10 things (they typically max out at 7), multiply two 3-digit numbers (most cannot), or recall events from decades ago (you get plausible confabulations).

Eliezer Yudkowsky ⏹️ (@ESYudkowsky)'s Twitter Profile Photo

Sensible people with high probabilities of ASI ruin don't obtain them by forecasting particular exotic scenarios; they think they see some end property which results from almost all unpredictable trajectories. Any time you hear somebody telling you about those wacky 'doomers'…

Eliezer Yudkowsky ⏹️ (@ESYudkowsky)'s Twitter Profile Photo

Unfortunate that people will read this and think: 'Ah, this is evidence AI is beneficial, that must mean it is less dangerous!' (Via affect heuristic.)

In reality, if this is a surprise to you at all, it indicates AI is more dangerous, because more powerful; e.g., it further…
