Pelayo Arbués (@pelayoarbues)'s Twitter Profile
Pelayo Arbués

@pelayoarbues

Ph.D. in Economics. Head of Data Science at @idealista. Passionate about data and film photography.

ID:55193695

Website: https://www.pelayoarbues.com/ · Joined: 09-07-2009 09:51:58

8.8K Tweets

2.2K Followers

1.9K Following

Pelayo Arbués (@pelayoarbues)'s Twitter Profile Photo

Many AI announcements these days. One of the coolest applications is making the learning experience on YouTube more interactive, letting you ask questions about the video. If it works as expected, it might be a great companion for YouTube learners.

Fernando de Córdoba (@gamusino)'s Twitter Profile Photo

The 1980s. A large Spanish company launches a project in which diverse profiles apply design thinking.

Software? AI? No.

The company is Renfe, and they are going to reinvent suburban transport.

Today, on the creation of the Cercanías brand.

With Foro de Marcas Renombradas Españolas ❤️

Manu Romero (@mrm8488)'s Twitter Profile Photo

Colab 📒 Notebook to fine-tune 💅🏽 Google AI vision-language model 👓🔠🧠
on a free T4 VM!
colab.research.google.com/github/google-…
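
The notebook link above is truncated, so as a rough, hedged sketch of what parameter-efficient fine-tuning of a ~3B vision-language model on a free T4 can look like with transformers + PEFT (assuming the target is the PaliGemma checkpoint announced below; the model id, 4-bit setup, and LoRA hyperparameters are illustrative assumptions, not the notebook's actual contents):

# Illustrative sketch only -- the real recipe lives in the Colab linked above.
# Assumes: transformers >= 4.41, peft, bitsandbytes, and a CUDA GPU such as a free Colab T4.
import torch
from transformers import (
    AutoProcessor,
    BitsAndBytesConfig,
    PaliGemmaForConditionalGeneration,
)
from peft import LoraConfig, get_peft_model

model_id = "google/paligemma-3b-pt-224"  # assumed checkpoint; pick the variant you need

# Load the base model in 4-bit so the 3B weights fit comfortably in the T4's 16 GB.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)  # used to build image+text batches

# Attach LoRA adapters to the attention projections; only these small adapter
# matrices are trained, which is what makes a single T4 sufficient.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 3B parameters

# From here, a standard transformers Trainer (or a manual loop) over an
# image + prompt + target-text dataset completes the fine-tune.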

Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

Nice blog - 'How We Saved 10s of Thousands of Dollars Deploying Low Cost Open Source AI Technologies At Scale with Kubernetes'

opensauced.pizza/blog/how-we-sa…

merve (@mervenoyann)'s Twitter Profile Photo

New open Vision Language Model by Google: PaliGemma 💙🤍

📝 Comes as a 3B model, with pretrained, mix, and fine-tuned variants at 224, 448 and 896 resolution
🧩 Combination of Gemma 2B LLM and SigLIP image encoder
🤗 Supported in Hugging Face transformers
Model capabilities are below ⬇️
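
Since the tweet highlights Hugging Face transformers support, here is a minimal inference sketch; the checkpoint id, prompt prefix, and image URL are illustrative assumptions rather than details from the announcement:

# Minimal PaliGemma inference sketch with Hugging Face transformers (>= 4.41).
# The checkpoint name, prompt, and image URL are placeholders for illustration.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # "mix" variant: tuned on a mixture of tasks
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Any RGB image works; this URL is just a placeholder example.
image = Image.open(
    requests.get("https://example.com/some_image.jpg", stream=True).raw
).convert("RGB")
prompt = "caption en"  # mix checkpoints expect short task prefixes like this

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
# Match the model's dtype to avoid a float16/float32 mismatch on the vision tower.
inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=30)

# Decode only the newly generated tokens, skipping the prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(generated[0][prompt_len:], skip_special_tokens=True))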

apolinario (multimodal.art) (@multimodalart)'s Twitter Profile Photo

The first open Stable Diffusion 3-like architecture model is JUST out 💣 - but it is not SD3! 🤔

It is HunyuanDiT by Tencent, a 1.5B parameter DiT (diffusion transformer) text-to-image model 🖼️✨

In the paper they claim SOTA among open-source models! I'm working on a Hugging Face demo

Hamel Husain (@HamelHusain)'s Twitter Profile Photo

Amazing news re: our LLM fine-tuning course: All students get $1,000 in free compute credits from Modal and Replicate ($500 each) 💰

Course signups close end-of-day today.

You get more compute credits than the course costs 🤯

maven.com/parlance-labs/…

Andrew Ng (@AndrewYNg)'s Twitter Profile Photo

New short course: 'Building Multimodal Search and RAG', by Weaviate's Sebastian Witalec ✊🏽✊🏾✊🏿.

Contrastive learning is used to train models to map vectors into an embedding space by pulling similar concepts closer together and pushing dissimilar concepts away from each other. This
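
To make the "pull similar together, push dissimilar apart" idea concrete, here is a small, generic sketch of a symmetric InfoNCE-style contrastive loss over paired image/text embeddings; it is an illustration of the objective, not code from the course:

# Generic contrastive (InfoNCE-style) loss sketch: matching image/text pairs are
# pulled together, while mismatched pairs within the batch are pushed apart.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    # Normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: entry (i, j) compares image i with text j.
    logits = image_emb @ text_emb.t() / temperature

    # The matching pair for row i is column i; every other item in the batch is a negative.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy over the image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random "embeddings" standing in for encoder outputs.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(contrastive_loss(img, txt))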

Pelayo Arbués (@pelayoarbues)'s Twitter Profile Photo

Just added an /ai page to my website, inspired by Derek Sivers. It's a public statement on how I use AI for the notes and writings I publish. pelayoarbues.com/mocs/ai

Xavier Marcet (@XavierMarcet)'s Twitter Profile Photo

SIMPLE LEADERSHIP

Leading is influencing more than commanding. I admire leaders who make it look easy. Without overacting. Achieving simplicity.
Today, in La Vanguardia

vicki (@vboykis)'s Twitter Profile Photo

The most interesting stuff in LLMs right now (to me) is:

+ figuring out how to do it small
+ figuring out how to do it on CPU
+ figuring out how to do it well for specific tasks

Nihit Desai (@nihit_desai)'s Twitter Profile Photo

Thrilled to introduce RefuelLLM-2, our latest family of LLMs built for data labeling and enrichment tasks. RefuelLLM-2 (83.82%) outperforms GPT-4-Turbo (80.88%), Claude-3-Opus (79.19%), Llama3-70B (78.2%) and Gemini-1.5-Pro (74.59%) on a benchmark of ~30 data labeling tasks:

Michael Nielsen (@michael_nielsen)'s Twitter Profile Photo

One rather strange event of the past ten days: like ~600,000 Australians, much of my retirement savings is with a superannuation (think 401-k) firm called Unisuper

Unisuper uses Google Cloud to store their data. Well, ~10 days ago, Google Cloud *accidentally* *deleted* *their*

Scott Piper (@0xdabbad00)'s Twitter Profile Photo

Google Cloud accidentally deleted a company's entire cloud environment (Unisuper, an investment company, which manages $80B). The company had backups in another region, but GCP deleted those too. Luckily, they had yet more backups on another provider.
theguardian.com/australia-news…

Pelayo Arbués (@pelayoarbues)'s Twitter Profile Photo

'Folks who communicate a no effectively are not the firmest speakers, nor do they make frequent use of the word itself. They are able to convincingly explain their team’s constraints and articulate why the proposed path is either unattainable or undesirable.' by Will Larson

Argilla (@argilla_io)'s Twitter Profile Photo

Over the last few months, we've been sharing a series of blog posts with MantisNLP about preference alignment techniques. If you’ve missed out, it’s not too late! We’ve gathered all the blogs in this thread 🧵 ⬇️

Aran Komatsuzaki (@arankomatsuzaki)'s Twitter Profile Photo

CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts

Outperforms SotA multimodal LLMs across various VQA and visual-instruction following benchmarks within each model size group

repo: github.com/SHI-Labs/CuMo
abs: arxiv.org/abs/2405.05949

Alejandro Vidal (@doblepensador)'s Twitter Profile Photo

See you tomorrow at the hands-on Generative AI workshop! 🤖
Learn about chatbots, tokens, attention, transformers, RAG, tools, cybersecurity and much more

Build your own PyBot in Python
🏆 The best PyBot wins a prize!

🙏 Thanks to BBVA AI Factory and PyDataMadrid
Last

Andrew Gao (@itsandrewgao)'s Twitter Profile Photo

🔔 The guy who invented the LSTM just dropped a new LLM architecture! (Sepp Hochreiter)

The major component is a new parallelizable LSTM.
⚠️ One of the major weaknesses of prior LSTMs was their sequential nature (computation can't be done all at once).

Everything we know about the xLSTM: 👇👇🧵
