Ryan Steed (@ryanbsteed)'s Twitter Profile
Ryan Steed

@ryanbsteed

PhD student @HeinzCollege @CarnegieMellon | privacy, fairness, & algorithmic systems • @[email protected]

ID: 1191093583018889218

Link: https://rbsteed.com • Joined: 03-11-2019 20:43:51

160 Tweets

394 Followers

422 Following

Kate Kaye on BlueSky at katekaye.bsky.s (@KateKayeReports)'s Twitter Profile Photo

🧵Delving into this substantial paper mapping AI audit tools ecosystem from Ojewale Victor Deb Raji Abeba Birhane Ryan Steed & Briana Vecchione.

I'm thrilled to see World Privacy Forum's complementary report on AI Governance Tools around the world & emerging problems is cited throughout.🙏 😊

Deb Raji (@rajiinio)'s Twitter Profile Photo

Already so proud of this team and their policy impact!

Incredible to see our OAT comment cited multiple times in the NTIA 'Artificial Intelligence Accountability Policy Report': ntia.gov/sites/default/…

Deb Raji (@rajiinio)'s Twitter Profile Photo

One of the biggest lessons learnt after our study on AI audit tooling (arxiv.org/abs/2402.17861) was how serious a pain point adequate model & data access continues to be.

I signed this letter bc building the tech infra is not enough - we'll need policy interventions as well!

Deb Raji (@rajiinio)'s Twitter Profile Photo

We spent over a year scavenging for AI audit tools and interviewing audit practitioners about their process.

What we found: the audit process is more complicated than we think, and the tasks we need tooling for extend far beyond just evaluation.

See: tools.auditing-ai.com

Abeba Birhane (@Abebab)'s Twitter Profile Photo

National Institute of Standards and Technology, Big Brother Watch: In any case, all this debate on accuracy scores is a DISTRACTION when the technology threatens fundamental rights such as the right to assembly.

Deployment of FRT for policing will alter Irish society for the worse, irreversibly, accurate or not.

18/

Abeba Birhane (@Abebab)'s Twitter Profile Photo

Ireland is in the midst of a heated debate on whether to legislate for police use of FRT. The Gardaí (Irish police) are adamant they need FRT at any cost

They are using this National Institute of Standards and Technology report (pages.nist.gov/frvt/html/frvt…) to claim 99% accuracy, which is deceptive & misleading

1/

Michael Feffer (@michael_feffer)'s Twitter Profile Photo

New preprint dropped! arxiv.org/abs/2401.15897
In it, Zachary Lipton, Hoda Heidari, Anusha Sinha, and I scrutinize and critique generative AI red-teaming practices found in the wild. 🧵(1/n)

Ojewale Victor (@OjewaleV)'s Twitter Profile Photo

Part of the interesting work we have been doing on the Mozilla Open Source Audit Tooling (OAT) project, trying to understand the current state of AI auditing vis-à-vis accountability.

Abeba Birhane Ryan Steed Briana Vecchione Deb Raji

Deb Raji (@rajiinio)'s Twitter Profile Photo

Happy to see this work on arxiv!

In it, we survey 300+ AI audit papers & dozens of audit reports from various domains (academia, civil society, govt, etc) to taxonomize what exactly is going on in the AI audit space & how their methods relate to impact on actual accountability.

Abeba Birhane (@Abebab)'s Twitter Profile Photo

New paper from Ryan Steed, Ojewale Victor, Briana Vecchione, Deb Raji & me.

'AI auditing: The Broken Bus on the Road to AI Accountability' arxiv.org/abs/2401.14462.

We review & taxonomize the current audit landscape & assess impact and effectiveness.

long 🧵

1/

Ryan Steed (@ryanbsteed)'s Twitter Profile Photo

.Abeba Birhane’s work is foundational, cite it!

It’s great that the cesspools that constitute training data are getting more attention (they are perpetually overlooked in “responsible” AI work), but it’s harmful and counterproductive to ignore unmissable studies like theirs.

Deb Raji (@rajiinio)'s Twitter Profile Photo

This is why we started the Open Source Audit Tooling project at Mozilla.

Methods/tools like SHAP/LIME, AI Fairness 360, etc - including those that are academically debunked - are very regularly named in official govt guidelines & documents around the world! Grateful for this work.
