Bartley Richardson (@BartleyR)'s Twitter Profile
Bartley Richardson

@BartleyR

Director of #cybersecurity engineering @NVIDIA | Lead Cyber AI, #Morpheus, #CLX | Cyber+ML/DL researcher | Engineering the future of cyber | views == mine

ID:28465291

Link: https://linktr.ee/bartleyr | Joined: 03-04-2009 01:01:27

1.5K Tweets

603 Followers

235 Following

Chau Dang (@DangTechNickel)'s Twitter Profile Photo

Join us this week for the Theater session: Supercharge Software Delivery With RAG [GENAI63714]. Learn how to use generative AI and event-driven RAG to help analysts address enterprise software security issues in minutes versus days.... bit.ly/49ZDPlm

Jigar Halani (@JigarHalani3)'s Twitter Profile Photo

Join NVIDIA’s #cybersecurity AI expert, @BartleyR, at #GTC24 to explore the power of #generativeAI in security. This technology detects cyber threats faster and improves models with synthetic training data. Register now. nvda.ws/493nuv4
David Williams (@d_comfe)'s Twitter Profile Photo

Been experimenting with custom AI agents for back-office automation. Saw a fantastic example by the NVIDIA cyber team led by Bartley Richardson. Game changer for the cybersecurity industry. First-movers win again.

DARPA (@DARPA)'s Twitter Profile Photo

30 years ago, the PDF was introduced to the world – and it's become the most widely used document format on the web. Our #SafeDocs program has developed tools and methodologies to help reduce vulnerabilities in file formats like PDF and beyond. More: darpa.mil/news-events/20…
François Chollet (@fchollet)'s Twitter Profile Photo

The first 'AI is coming for your job' mass panic I experienced first-hand was in 2014 and was based on the same premise -- that human doctors would soon be a thing of the past. It was triggered by the now-defunct IBM Watson.

Thomas G. Dietterich (@tdietterich)'s Twitter Profile Photo

Replying to Daniel Vassallo: The key difference is that a database query will return the empty set if there are no matching records. The LLM will generate plausible tuples instead. It’s a probabilistic model of a database rather than a database itself.
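
A minimal sketch of the contrast Dietterich is drawing, using Python's built-in sqlite3 module; the users table, names, and query below are hypothetical and purely illustrative:

import sqlite3

# In-memory database with a small, hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('Ada', 'ada@example.com')")

# A query with no matching records returns the empty set; nothing is invented.
rows = conn.execute(
    "SELECT name, email FROM users WHERE name = ?", ("Grace",)
).fetchall()
print(rows)  # []

# An LLM asked the same question ("What is Grace's email?") samples from a
# distribution over plausible answers, so it may produce a well-formed but
# fabricated tuple like ('Grace', 'grace@example.com') instead of an empty set.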

Paul Graham (@paulg)'s Twitter Profile Photo

Replying to Elon Musk: Interesting point, but an example might make it clearer. Can you think of a prominent person who's currently wasting his talents in software when he could be working on manufacturing and heavy industries?

Bartley Richardson (@BartleyR)'s Twitter Profile Photo

Except, it didn’t. 🙄 Perhaps the real threat isn’t AI but those who sensationalize and overpromise?

gesikowski.medium.com/gpt-4-tried-to…

Rich Harang (@rharang)'s Twitter Profile Photo

We're still in a world where ransomware causes major data/financial loss every few weeks (that we hear about).

These YOLO approaches to slapping together LLMs with external compute -- and the bad practices that are becoming standards -- are going to haunt us indefinitely.

Clément (@clmt)'s Twitter Profile Photo

One of the more balanced takes on LLMs, and a useful mental model for thinking about their capabilities.
