A year ago we introduced EgoMimic. Now I'm excited to share a major update: Egocentric Human Data for Mobile Manipulation
Robot teleop is costly. Scaling mobile manipulation teleop is borderline impossible. Can we learn mobile manipulation from human data? The answer is Yes.
Our…
How do we unlock the full dexterity of robot hands with data, even beyond what teleoperation can achieve?
DEXOP captures natural human manipulation with full-hand tactile & proprioceptive sensing, plus direct force feedback to users, without needing a robot 👉 dex-op.github.io
🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.
How do we learn motor skills directly in the real world? Think about learning to ride a bike—parents might be there to give you hands-on guidance.🚲
Can we apply this same idea to robots?
Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.
@svlevine Good article. I have three comments:
1. With any hard optimization problem, if you can get into the right ballpark, you save a lot of time searching around. I think that's where human demonstration really helps.
2. When a human watches Roger Federer, they get the gist of what…
We’ve all seen humanoid robots doing backflips and dance routines for years.
But if you ask them to climb a few stairs in the real world, they stumble!
We took our robot on a walk around town to environments that it hadn’t seen before. Here’s how it works🧵⬇️
How can we leverage diverse human videos to improve robot manipulation?
Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling…
How should an RL agent leverage expert data to improve sample efficiency?
Imitation losses can overly constrain an RL policy.
In RL via Implicit Imitation Guidance, we show how to use expert data to guide more efficient *exploration*, avoiding pitfalls of imitation-augmented RL
If you have a policy that uses diffusion/flow (e.g. diffusion VLA), you can run RL where the actor chooses the noise, which is then denoised by the policy to produce an action. This method, which we call diffusion steering (DSRL), leads to a remarkably efficient RL method! 🧵👇
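A minimal sketch of the noise-as-action interface described above: the RL actor outputs the *noise* that a frozen diffusion/flow policy then denoises into an action. The `denoise` function here is a toy stand-in (DSRL assumes a real pretrained diffusion/flow policy), and the names `denoise` and `actor` are illustrative, not from the paper.

```python
import numpy as np

def denoise(obs, w, steps=4):
    """Toy stand-in for a pretrained flow/diffusion policy.

    Integrates the noise sample w toward an observation-dependent
    target action over a few denoising steps.
    """
    a = w.copy()
    target = np.tanh(obs)  # pretend this is the policy's modal action
    for _ in range(steps):
        a = a + (target - a) / steps
    return a

def actor(obs, theta):
    """DSRL-style actor: outputs the noise fed to the frozen policy.

    RL optimizes theta over this noise space instead of the raw
    action space, so the pretrained policy's weights stay untouched.
    """
    return theta @ obs  # deterministic noise-selection head (toy)

obs = np.array([0.5, -1.0])
theta = np.zeros((2, 2))  # untrained actor -> zero noise
action = denoise(obs, actor(obs, theta))
```

The key design point is that the environment action is always produced by the frozen policy, so the RL actor can only steer within the distribution the pretrained policy already covers.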
What would a World Model look like if we start from a real embodied agent acting in the real world?
It has to have: 1) A real, physically grounded and complex action space—not just abstract control signals. 2) Diverse, real-life scenarios and activities.
Or in short:
It has to…
🤖 Can a humanoid robot hold extreme single-leg poses like Bruce Lee's Kick or the Swallow Balance? 🤸
💥 YES. Meet HuB: Learning Extreme Humanoid Balance
🔗 Project website: hub-robot.github.io
Command humanoids *directly* with natural language? Introducing LangWBC, a generative, end-to-end policy that turns natural language into real-world whole-body humanoid control! 💬→🦿 Smooth, robust, surprisingly intuitive! See more 👉 LangWBC.github.io #RSS2025
Can we prompt robots, just like we prompt language models?
With a hierarchy of VLA models + LLM-generated data, robots can:
- reason through long-horizon tasks
- respond to a variety of prompts
- handle situated corrections
Blog post & paper: pi.website/research/hirob…
Time to democratize humanoid robots!
Introducing ToddlerBot, a low-cost ($6K), open-source humanoid for robotics and AI research.
Watch two ToddlerBots seamlessly chain their loco-manipulation skills to collaborate in tidying up after a toy session.
toddlerbot.github.io
1K Followers · 1K Following · Ph.D. @CarnegieMellon. Working on agentic foundation model systems. Founder of the FM-Wild workshop series and the ASAP seminar series. They/Them
11K Followers · 3K Following · The Paul G. Allen School of Computer Science & Engineering educates tomorrow's innovators while developing solutions to humanity's greatest challenges.
3K Followers · 6K Following · Biomedical Engineering researcher turned Systems Designer. Machine learning, AI, robotics, systems, design, cryptography, etc. Fell in love with ML
760 Followers · 666 Following · Doctoral Student with @crl_ethz @ETH, working on RL, robotics, and more | Previously @MPI_IS @Tsinghua_Uni - Cells interlinked within cells, interlinked.
164 Followers · 128 Following · Hi! I am a third-year undergraduate at @sjtu1896. I am interested in Robot Learning and Embodied AI. Website: https://t.co/S5lYcsru7n
529 Followers · 172 Following · PhD Student, Visual Computing Group @TU_Muenchen |
3D Generative AI / 3D Reconstruction / Neural Rendering / Style Transfer / NeRF & CUDA
606 Followers · 88 Following · CS PhD at @Tsinghua_Uni, focusing on building large-scale robotic datasets and training large models for generalizable robotic manipulation.
3K Followers · 750 Following · Senior Director, Simulation Technology @NVIDIA. Physics simulation, robotics, GPUs. Currently working on Newton and NVIDIA Warp https://t.co/8LwMOr3sST.
29K Followers · 431 Following · Professor, CS, U. British Columbia. CIFAR AI Chair, Vector Institute. Sr. Advisor, DeepMind | ML, AI, deep RL, deep learning, AI-Generating Algorithms (AI-GAs)
5K Followers · 552 Following · Assistant Professor @Harvard SEAS @hseas, leads the Harvard Computational Robotics Lab. #Robotics, #Optimization, #Control, #Vision, #Learning
24K Followers · 251 Following · @Cohere's research lab and open science initiative that seeks to solve complex machine learning problems. Join us in exploring the unknown, together.
223 Followers · 114 Following · Master's student at @Tsinghua_IIIS with Prof. @gao_young, working on Embodied AI and 3DV 🐣. I welcome exploration, creation and excitement🦄.
707 Followers · 529 Following · Staff Research Scientist @ ByteDance Seed, working on robot foundation models. Prev: Research @Dyson | PhD @NUSingapore. All views are my own.
233 Followers · 2K Following · EE @SJTU1896 | Previous Intern @CMU_Robotics; @UCSD. Learn to learn representations and learn to search within and beyond. 🤖🧠👁️
44 Followers · 291 Following · Undergraduate researcher at Tsinghua University, focusing on World Models and Robotics. Current intern @StanfordSVL. My site: https://t.co/R9bicUDDdk
566 Followers · 50 Following · PhD in Computer Science candidate @Stanford. My interests span learning for robotic planning, control, vision systems, and their interfacing representations.
957 Followers · 346 Following · Assistant Professor, Seoul National University (SNU)
Previously,
Research Scientist, Facebook AI Research
PhD student RI CMU
https://t.co/vVaNVzsJCI
72K Followers · 3K Following · The world's largest professional organization advancing #computing as a science and profession. Also @mastodon.acm.org
Likes & shares ≠ endorsement
43K Followers · 2K Following · Official Twitter for @TheOfficialACM's Special Interest Group on Computer Graphics & Interactive Techniques + its conferences. #SIGGRAPH2025 #SIGGRAPHAsia2025
921 Followers · 584 Following · AI Research Scientist at Meta Reality Labs (in Zurich) | PhD at UC Berkeley | MIT EECS BS '20 & MEng '21 | CV for AR/VR & robotics | https://t.co/YhPzCHLcqi