Tony Z. Zhao (@tonyzzhao)'s Twitter Profile
Tony Z. Zhao

@tonyzzhao

CS PhD student @Stanford. Aspiring full-stack roboticist. Prev Deepmind, Tesla, GoogleX, Berkeley.

ID: 1074448308578336768

Link: https://tonyzhaozh.github.io/ · Joined: 16-12-2018 23:36:53

303 Tweets

12.4K Followers

785 Following

Toru (@ToruO_O)'s Twitter Profile Photo

Imitation learning works™ – but you need good data 🥹 How to get high-quality visuotactile demos from a bimanual robot with multifingered hands, and learn smooth policies?

Check our new work “Learning Visuotactile Skills with Two Multifingered Hands”! 🙌
toruowo.github.io/hato/

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

The world of hardware is accelerating fast. Attention to the whole system, not just software/AI, will be necessary for real embodied AI.

Hugo Barra (@hbarra)'s Twitter Profile Photo

Was incredibly fun coming to the @MKBHD studio to record a Waveform podcast episode with Marques and @DurvidImel — and also got to talk about my latest VR setup 🙃
Chelsea Finn (@chelseabfinn)'s Twitter Profile Photo

Excited to share one of the last projects I worked on at Google --

We picked three of the most dexterous tasks we could think of:
- tying shoelaces 👟
- replacing a robot finger 🤖
- hanging a shirt 👕
and tried to see if we could train a policy to do each.

All of them worked!

Ayzaan Wahid (@ayzwah)'s Twitter Profile Photo

For the past year we've been working on ALOHA Unleashed 🌋 @GoogleDeepmind - pushing the scale and dexterity of tasks on our ALOHA 2 fleet. Here is a thread with some of the coolest videos!

The first task is hanging a shirt on a hanger (autonomous 1x)

Chengshu Li (@ChengshuEricLi)'s Twitter Profile Photo

We just cut a new official release of BEHAVIOR-1K, with all 1,000 activities successfully sampled across 50 high-quality, interactive scenes, ready to be tackled by robotics AI researchers!

Visit our website and try it out today!
behavior.stanford.edu

Cheng Chi (@chichengcc)'s Twitter Profile Photo

UMI x ARX
Yihuai Gao just got our in-the-wild cup policy working with the ARX5! We are still tuning the controller and latency matching for smoother tracking. Lots of potential in these low-cost, lightweight arms!

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

End-to-end low-level skills with language corrections on the fly. Huge commitment from Lucy Shi to pull this off. Congratulations!

Karl Pertsch (@KarlPertsch)'s Twitter Profile Photo

Access to *diverse* training data is a major bottleneck in robot learning. We're releasing DROID, a large-scale in-the-wild manipulation dataset: 76k trajectories, 500+ scenes, multi-view stereo, language annotations, and more.
Check it out & download today!

💻: droid-dataset.github.io

Alexander Khazatsky (@SashaKhazatsky)'s Twitter Profile Photo

After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset”

DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

It is refreshing to see highly creative, open-source works like DexCap building on top of another highly creative, open-source work (the LEAP Hand by Kenny Shaw).

This is the best way forward for the community. Congratulations Chen Wang!

Shuran Song (@SongShuran)'s Twitter Profile Photo

Check out UMI! 3 things I learned in this project:

1. Wrist-mount cameras can be sufficient for challenging manipulation tasks with the right hardware design.

2. Cross-embodiment policy is possible with the right policy interface.

3. BC can generalize if the data is right.

Chenhao Li @ ICLR (@breadli428)'s Twitter Profile Photo

When I revisit the videos of the amazing work by Zipeng Fu, Tony Z. Zhao, and Chelsea Finn, I realize how robust robotic behaviors can be recovered from demonstrations. Maybe imitation learning with high-quality data is the right way to go?

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

I wish we could come back to this tweet in a decade and be like "Hey, here is when we finally cracked data collection."

Low-cost, portable, hardware agnostic. I could not ask for more!

Hugo Barra (@hbarra)'s Twitter Profile Photo

Impressive progress from the Stanford Robotics Center applying generative AI (diffusion models) to training robots.

This work suggests that low-cost, human-trained domestic robots are not too far off in our future.

More details here: umi-gripper.github.io
