Assistant Professor @mldcmu. Formerly: Postdoc @MITEECS, PhD @Berkeley_EECS, Math Undergrad @Princeton. New to Twitter. https://t.co/67bMOAyqK6 · Joined May 2024
👏👏This is pretty massive!! Generative modeling looks clean in math, but getting it up and running can require a fair bit of alchemy. 🧪🧪
Thankfully, Nick Boffi (co-creator of Stochastic Interpolants) just dropped a super-clean, super-fast, super-reproducible repo for core…
TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/
One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
Very cool! In addition to optimizing inference-time search as a learning desideratum, this really speaks to the power of building reward models purely from expert trajectories, via discriminative objectives. Excited to see how far this can go!
I am giving a talk "From Sim2Real 1.0 to 4.0 for Humanoid Whole-Body Control and Loco-Manipulation" at the RoboLetics 2.0 workshop @ieee_ras_icra today, summarizing my recent thoughts on sim2real.
If you are interested: 2pm, May 23 @ room 302.
Want to scale robot data with simulation, but don’t know how to get large numbers of realistic, diverse, and task-relevant scenes?
Our solution:
➊ Pretrain on broad procedural scene data
➋ Steer generation toward downstream objectives
🌐 steerable-scene-generation.github.io
🧵1/8
RL and post-training play a central role in giving language models advanced reasoning capabilities, but many algorithmic and scientific questions remain unanswered.
Join us at FoPT @ COLT '25 to explore these emerging challenges and the opportunities for theory to bring clarity.
Congrats to Andrea Bajcsy (@andrea_bajcsy) on receiving the NSF CAREER award! 👏
Her work, “Formalizing Open World Safety for Interactive Robots,” explores how robots make safe decisions beyond collision avoidance. Read about it and her education plans: loom.ly/59evuD0
Building AI systems is now a fragmented process spanning multiple organizations & entities.
In new work (w/ @aspenkhopkins, @cen_sarah, @andrew_ilyas, @imstruckman, @LVidegaray), we study the implications of these emerging networks → what we call *AI supply chains* 🧵
Before the (exciting) workshops on Sun, catch Vincent’s oral talk at the #ICLR2025 main conference on this paper today at 3:30pm, Hall 1 Apex!
And don’t forget to talk with the co-leads Vincent and @YiSu37328759 at the poster, 10 a.m.-12:30 p.m., Hall 3 + Hall 2B, #558.
So excited for this!!!
The key technical breakthrough here is that we can control the joints and fingertips of the robot **without joint encoders**.
Learning from self-supervised data collection is all you need to train the humanoid hand control you see below.
278 Followers · 2K Following · 💙💙💙💙
#😊Love to travel and explore more beautiful landscapes
#😍Enjoying the best years of your life
#✌️ Be the best you can be
2K Followers · 2K Following · Junior fellow at the Society of Fellows at @Harvard and @iaifi_news fellow, incoming Assistant Professor at @Harvard and the @KempnerInst. Views my own.
208 Followers · 8K Following · Passionate about AI 🤖, ML 🧠, AGI 🌐, ASI 🚀, and robotics 🤖.
Never lose hope in God's mercy 💫.
AI Engineer Microsoft
He studies at MIT.
Free Palestine 🇵🇸
187 Followers · 426 Following · Products to enhance human agency
@southpkcommons, @buildexante, @AgencyFund
Past: Public Health @Meta, Head of ML @PureStorage
1K Followers · 2K Following · JR East Professor of Engineering. Head, Department of Civil and Environmental Engineering, Massachusetts Institute of Technology
4K Followers · 507 Following · Researcher @OpenAI, core member of GPT image generation and member of Sora video generation. PhD @MITEECS. I do world models, RL, and robotics.
4K Followers · 921 Following · Assistant Professor at UPenn. Research interests: Neural Scene Representation, Neural Rendering, Human Performance Modeling and Capture.
16K Followers · 307 Following · Teaching AI to see, model, and interact with our 3D world. Assistant Professor @ MIT, leading the Scene Representation Group (https://t.co/h5gvhLYrtw).
8K Followers · 544 Following · Assistant Professor of Computer Science @Columbia @ColumbiaCompSci, Postdoc from @Stanford @StanfordSVL, PhD from @MIT_CSAIL. #Robotics #Vision #Learning
21K Followers · 269 Following · Pioneering the future of robotics since 1979. We’re transforming industries and everyday life through cutting-edge innovation and world-class education.
16K Followers · 497 Following · Harvard Professor.
Full stack ML and AI.
Co-director of the Kempner Institute for the Study of Artificial and Natural Intelligence.
38K Followers · 485 Following · Digital Geometer, Assoc. Prof. of Computer Science & Robotics @CarnegieMellon @SCSatCMU and member of the @GeomCollective. There are four lights.